00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-main" build number 3893 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3488 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.106 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.107 The recommended git tool is: git 00:00:00.107 using credential 00000000-0000-0000-0000-000000000002 00:00:00.109 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.183 Fetching changes from the remote Git repository 00:00:00.185 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.243 Using shallow fetch with depth 1 00:00:00.244 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.244 > git --version # timeout=10 00:00:00.295 > git --version # 'git version 2.39.2' 00:00:00.295 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.328 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.328 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.661 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.671 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.683 Checking out Revision 7510e71a2b3ec6fca98e4ec196065590f900d444 (FETCH_HEAD) 00:00:07.683 > git config core.sparsecheckout # timeout=10 00:00:07.694 > git read-tree -mu HEAD # timeout=10 00:00:07.712 > git checkout -f 7510e71a2b3ec6fca98e4ec196065590f900d444 # timeout=5 00:00:07.730 Commit message: "kid: add issue 3541" 00:00:07.730 > git rev-list --no-walk 7510e71a2b3ec6fca98e4ec196065590f900d444 # timeout=10 00:00:07.816 [Pipeline] Start of Pipeline 00:00:07.829 [Pipeline] library 00:00:07.831 Loading library shm_lib@master 00:00:07.831 Library shm_lib@master is cached. Copying from home. 00:00:07.845 [Pipeline] node 00:00:07.856 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:07.857 [Pipeline] { 00:00:07.864 [Pipeline] catchError 00:00:07.865 [Pipeline] { 00:00:07.873 [Pipeline] wrap 00:00:07.879 [Pipeline] { 00:00:07.889 [Pipeline] stage 00:00:07.891 [Pipeline] { (Prologue) 00:00:07.911 [Pipeline] echo 00:00:07.913 Node: VM-host-SM38 00:00:07.920 [Pipeline] cleanWs 00:00:07.931 [WS-CLEANUP] Deleting project workspace... 00:00:07.931 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.938 [WS-CLEANUP] done 00:00:08.201 [Pipeline] setCustomBuildProperty 00:00:08.295 [Pipeline] httpRequest 00:00:08.634 [Pipeline] echo 00:00:08.635 Sorcerer 10.211.164.101 is alive 00:00:08.643 [Pipeline] retry 00:00:08.644 [Pipeline] { 00:00:08.655 [Pipeline] httpRequest 00:00:08.660 HttpMethod: GET 00:00:08.660 URL: http://10.211.164.101/packages/jbp_7510e71a2b3ec6fca98e4ec196065590f900d444.tar.gz 00:00:08.661 Sending request to url: http://10.211.164.101/packages/jbp_7510e71a2b3ec6fca98e4ec196065590f900d444.tar.gz 00:00:08.678 Response Code: HTTP/1.1 200 OK 00:00:08.678 Success: Status code 200 is in the accepted range: 200,404 00:00:08.679 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_7510e71a2b3ec6fca98e4ec196065590f900d444.tar.gz 00:00:17.012 [Pipeline] } 00:00:17.031 [Pipeline] // retry 00:00:17.040 [Pipeline] sh 00:00:17.330 + tar --no-same-owner -xf jbp_7510e71a2b3ec6fca98e4ec196065590f900d444.tar.gz 00:00:17.349 [Pipeline] httpRequest 00:00:17.794 [Pipeline] echo 00:00:17.796 Sorcerer 10.211.164.101 is alive 00:00:17.807 [Pipeline] retry 00:00:17.809 [Pipeline] { 00:00:17.824 [Pipeline] httpRequest 00:00:17.830 HttpMethod: GET 00:00:17.830 URL: http://10.211.164.101/packages/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:00:17.831 Sending request to url: http://10.211.164.101/packages/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:00:17.851 Response Code: HTTP/1.1 200 OK 00:00:17.852 Success: Status code 200 is in the accepted range: 200,404 00:00:17.852 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:01:06.048 [Pipeline] } 00:01:06.065 [Pipeline] // retry 00:01:06.073 [Pipeline] sh 00:01:06.362 + tar --no-same-owner -xf spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:01:09.687 [Pipeline] sh 00:01:09.975 + git -C spdk log --oneline -n5 00:01:09.975 09cc66129 test/unit: add mixed busy/idle mock poller function in reactor_ut 00:01:09.975 a67b3561a dpdk: update submodule to include alarm_cancel fix 00:01:09.975 43f6d3385 nvmf: remove use of STAILQ for last_wqe events 00:01:09.975 9645421c5 nvmf: rename nvmf_rdma_qpair_process_ibv_event() 00:01:09.975 e6da32ee1 nvmf: rename nvmf_rdma_send_qpair_async_event() 00:01:09.996 [Pipeline] withCredentials 00:01:10.007 > git --version # timeout=10 00:01:10.021 > git --version # 'git version 2.39.2' 00:01:10.042 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:10.044 [Pipeline] { 00:01:10.053 [Pipeline] retry 00:01:10.055 [Pipeline] { 00:01:10.071 [Pipeline] sh 00:01:10.358 + git ls-remote http://dpdk.org/git/dpdk main 00:01:10.372 [Pipeline] } 00:01:10.390 [Pipeline] // retry 00:01:10.395 [Pipeline] } 00:01:10.411 [Pipeline] // withCredentials 00:01:10.421 [Pipeline] httpRequest 00:01:10.846 [Pipeline] echo 00:01:10.848 Sorcerer 10.211.164.101 is alive 00:01:10.858 [Pipeline] retry 00:01:10.860 [Pipeline] { 00:01:10.875 [Pipeline] httpRequest 00:01:10.881 HttpMethod: GET 00:01:10.881 URL: http://10.211.164.101/packages/dpdk_41dd9a6bc2d9c6e20e139ad713cc9d172572dd43.tar.gz 00:01:10.882 Sending request to url: http://10.211.164.101/packages/dpdk_41dd9a6bc2d9c6e20e139ad713cc9d172572dd43.tar.gz 00:01:10.883 Response Code: HTTP/1.1 200 OK 00:01:10.884 Success: Status code 200 is in the accepted range: 200,404 00:01:10.885 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_41dd9a6bc2d9c6e20e139ad713cc9d172572dd43.tar.gz 00:01:16.922 [Pipeline] } 00:01:16.939 [Pipeline] // retry 
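Editor's note: the jbp, spdk and dpdk sources above are not cloned on the build VM; each tree is fetched from the Sorcerer package cache (10.211.164.101) as a tarball named after its pinned commit and unpacked with tar --no-same-owner. A rough shell equivalent of that fetch-and-unpack step (a sketch only: the pipeline uses the Jenkins httpRequest step, and curl here is an assumption standing in for it):

    # Fetch the pre-packaged SPDK tree for the pinned commit from the cache, then unpack it.
    pkg=spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz
    curl -fSo "$pkg" "http://10.211.164.101/packages/$pkg"   # assumption: curl in place of the httpRequest step
    tar --no-same-owner -xf "$pkg"                            # same flags as the sh step shown in the log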
00:01:16.946 [Pipeline] sh 00:01:17.232 + tar --no-same-owner -xf dpdk_41dd9a6bc2d9c6e20e139ad713cc9d172572dd43.tar.gz 00:01:18.631 [Pipeline] sh 00:01:18.915 + git -C dpdk log --oneline -n5 00:01:18.915 41dd9a6bc2 doc: reorganize prog guide 00:01:18.915 cb9187bc5c version: 24.11-rc0 00:01:18.915 b3485f4293 version: 24.07.0 00:01:18.915 fa58aec335 doc: add tested platforms with NVIDIA NICs 00:01:18.915 ae3e05c916 doc: add tested Intel platforms with Intel NICs 00:01:18.933 [Pipeline] writeFile 00:01:18.946 [Pipeline] sh 00:01:19.232 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:19.245 [Pipeline] sh 00:01:19.535 + cat autorun-spdk.conf 00:01:19.535 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:19.535 SPDK_TEST_NVME=1 00:01:19.535 SPDK_TEST_FTL=1 00:01:19.535 SPDK_TEST_ISAL=1 00:01:19.535 SPDK_RUN_ASAN=1 00:01:19.535 SPDK_RUN_UBSAN=1 00:01:19.535 SPDK_TEST_XNVME=1 00:01:19.535 SPDK_TEST_NVME_FDP=1 00:01:19.535 SPDK_TEST_NATIVE_DPDK=main 00:01:19.536 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:19.536 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:19.544 RUN_NIGHTLY=1 00:01:19.545 [Pipeline] } 00:01:19.554 [Pipeline] // stage 00:01:19.563 [Pipeline] stage 00:01:19.565 [Pipeline] { (Run VM) 00:01:19.573 [Pipeline] sh 00:01:19.855 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:19.855 + echo 'Start stage prepare_nvme.sh' 00:01:19.855 Start stage prepare_nvme.sh 00:01:19.855 + [[ -n 3 ]] 00:01:19.855 + disk_prefix=ex3 00:01:19.855 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:01:19.855 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:19.855 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:19.855 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:19.855 ++ SPDK_TEST_NVME=1 00:01:19.855 ++ SPDK_TEST_FTL=1 00:01:19.855 ++ SPDK_TEST_ISAL=1 00:01:19.855 ++ SPDK_RUN_ASAN=1 00:01:19.855 ++ SPDK_RUN_UBSAN=1 00:01:19.855 ++ SPDK_TEST_XNVME=1 00:01:19.855 ++ SPDK_TEST_NVME_FDP=1 00:01:19.855 ++ SPDK_TEST_NATIVE_DPDK=main 00:01:19.855 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:19.855 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:19.855 ++ RUN_NIGHTLY=1 00:01:19.855 + cd /var/jenkins/workspace/nvme-vg-autotest 00:01:19.855 + nvme_files=() 00:01:19.855 + declare -A nvme_files 00:01:19.855 + backend_dir=/var/lib/libvirt/images/backends 00:01:19.855 + nvme_files['nvme.img']=5G 00:01:19.855 + nvme_files['nvme-cmb.img']=5G 00:01:19.855 + nvme_files['nvme-multi0.img']=4G 00:01:19.855 + nvme_files['nvme-multi1.img']=4G 00:01:19.855 + nvme_files['nvme-multi2.img']=4G 00:01:19.855 + nvme_files['nvme-openstack.img']=8G 00:01:19.855 + nvme_files['nvme-zns.img']=5G 00:01:19.855 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:19.855 + (( SPDK_TEST_FTL == 1 )) 00:01:19.855 + nvme_files["nvme-ftl.img"]=6G 00:01:19.855 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:19.855 + nvme_files["nvme-fdp.img"]=1G 00:01:19.855 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:19.855 + for nvme in "${!nvme_files[@]}" 00:01:19.855 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G 00:01:19.855 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:19.855 + for nvme in "${!nvme_files[@]}" 00:01:19.855 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-ftl.img -s 6G 00:01:20.420 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:20.420 + for nvme in "${!nvme_files[@]}" 00:01:20.420 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G 00:01:20.420 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:20.420 + for nvme in "${!nvme_files[@]}" 00:01:20.420 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G 00:01:20.420 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:20.678 + for nvme in "${!nvme_files[@]}" 00:01:20.678 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G 00:01:20.678 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:20.678 + for nvme in "${!nvme_files[@]}" 00:01:20.678 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G 00:01:20.678 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:20.678 + for nvme in "${!nvme_files[@]}" 00:01:20.678 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G 00:01:20.678 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:20.678 + for nvme in "${!nvme_files[@]}" 00:01:20.678 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-fdp.img -s 1G 00:01:20.678 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:20.678 + for nvme in "${!nvme_files[@]}" 00:01:20.678 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G 00:01:20.936 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:20.936 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu 00:01:20.936 + echo 'End stage prepare_nvme.sh' 00:01:20.936 End stage prepare_nvme.sh 00:01:20.946 [Pipeline] sh 00:01:21.224 + DISTRO=fedora39 00:01:21.224 + CPUS=10 00:01:21.224 + RAM=12288 00:01:21.224 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:21.225 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex3-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex3-nvme.img -b /var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex3-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:01:21.225 00:01:21.225 
DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:01:21.225 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:01:21.225 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:01:21.225 HELP=0 00:01:21.225 DRY_RUN=0 00:01:21.225 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme-ftl.img,/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,/var/lib/libvirt/images/backends/ex3-nvme-fdp.img, 00:01:21.225 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:21.225 NVME_AUTO_CREATE=0 00:01:21.225 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,, 00:01:21.225 NVME_CMB=,,,, 00:01:21.225 NVME_PMR=,,,, 00:01:21.225 NVME_ZNS=,,,, 00:01:21.225 NVME_MS=true,,,, 00:01:21.225 NVME_FDP=,,,on, 00:01:21.225 SPDK_VAGRANT_DISTRO=fedora39 00:01:21.225 SPDK_VAGRANT_VMCPU=10 00:01:21.225 SPDK_VAGRANT_VMRAM=12288 00:01:21.225 SPDK_VAGRANT_PROVIDER=libvirt 00:01:21.225 SPDK_VAGRANT_HTTP_PROXY= 00:01:21.225 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:21.225 SPDK_OPENSTACK_NETWORK=0 00:01:21.225 VAGRANT_PACKAGE_BOX=0 00:01:21.225 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:21.225 FORCE_DISTRO=true 00:01:21.225 VAGRANT_BOX_VERSION= 00:01:21.225 EXTRA_VAGRANTFILES= 00:01:21.225 NIC_MODEL=e1000 00:01:21.225 00:01:21.225 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:01:21.225 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:01:23.763 Bringing machine 'default' up with 'libvirt' provider... 00:01:24.024 ==> default: Creating image (snapshot of base box volume). 00:01:24.024 ==> default: Creating domain with the following settings... 
00:01:24.024 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1727518918_3e645906bca59f5c0e91 00:01:24.024 ==> default: -- Domain type: kvm 00:01:24.024 ==> default: -- Cpus: 10 00:01:24.024 ==> default: -- Feature: acpi 00:01:24.024 ==> default: -- Feature: apic 00:01:24.024 ==> default: -- Feature: pae 00:01:24.024 ==> default: -- Memory: 12288M 00:01:24.024 ==> default: -- Memory Backing: hugepages: 00:01:24.024 ==> default: -- Management MAC: 00:01:24.024 ==> default: -- Loader: 00:01:24.024 ==> default: -- Nvram: 00:01:24.024 ==> default: -- Base box: spdk/fedora39 00:01:24.024 ==> default: -- Storage pool: default 00:01:24.024 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1727518918_3e645906bca59f5c0e91.img (20G) 00:01:24.024 ==> default: -- Volume Cache: default 00:01:24.024 ==> default: -- Kernel: 00:01:24.024 ==> default: -- Initrd: 00:01:24.024 ==> default: -- Graphics Type: vnc 00:01:24.024 ==> default: -- Graphics Port: -1 00:01:24.024 ==> default: -- Graphics IP: 127.0.0.1 00:01:24.024 ==> default: -- Graphics Password: Not defined 00:01:24.024 ==> default: -- Video Type: cirrus 00:01:24.024 ==> default: -- Video VRAM: 9216 00:01:24.024 ==> default: -- Sound Type: 00:01:24.024 ==> default: -- Keymap: en-us 00:01:24.024 ==> default: -- TPM Path: 00:01:24.024 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:24.024 ==> default: -- Command line args: 00:01:24.024 ==> default: -> value=-device, 00:01:24.024 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:24.024 ==> default: -> value=-drive, 00:01:24.024 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:24.024 ==> default: -> value=-device, 00:01:24.024 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:24.024 ==> default: -> value=-device, 00:01:24.024 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:24.024 ==> default: -> value=-drive, 00:01:24.024 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-1-drive0, 00:01:24.024 ==> default: -> value=-device, 00:01:24.024 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:24.024 ==> default: -> value=-device, 00:01:24.024 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:01:24.024 ==> default: -> value=-drive, 00:01:24.024 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:24.024 ==> default: -> value=-device, 00:01:24.024 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:24.024 ==> default: -> value=-drive, 00:01:24.024 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:24.024 ==> default: -> value=-device, 00:01:24.024 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:24.024 ==> default: -> value=-drive, 00:01:24.024 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:24.024 ==> default: -> value=-device, 00:01:24.024 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:24.024 ==> default: -> value=-device, 00:01:24.024 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:24.024 ==> default: -> value=-device, 00:01:24.024 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:01:24.024 ==> default: -> value=-drive, 00:01:24.024 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:24.024 ==> default: -> value=-device, 00:01:24.024 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:24.322 ==> default: Creating shared folders metadata... 00:01:24.322 ==> default: Starting domain. 00:01:26.232 ==> default: Waiting for domain to get an IP address... 00:01:41.133 ==> default: Waiting for SSH to become available... 00:01:41.133 ==> default: Configuring and enabling network interfaces... 00:01:44.434 default: SSH address: 192.168.121.39:22 00:01:44.434 default: SSH username: vagrant 00:01:44.434 default: SSH auth method: private key 00:01:46.983 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:53.583 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:01:58.877 ==> default: Mounting SSHFS shared folder... 00:02:00.794 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:00.794 ==> default: Checking Mount.. 00:02:02.178 ==> default: Folder Successfully Mounted! 00:02:02.178 00:02:02.178 SUCCESS! 00:02:02.178 00:02:02.178 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:02.178 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:02.178 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:02.178 00:02:02.188 [Pipeline] } 00:02:02.203 [Pipeline] // stage 00:02:02.212 [Pipeline] dir 00:02:02.213 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:02:02.214 [Pipeline] { 00:02:02.229 [Pipeline] catchError 00:02:02.231 [Pipeline] { 00:02:02.245 [Pipeline] sh 00:02:02.530 + vagrant ssh-config --host vagrant 00:02:02.530 + sed -ne '/^Host/,$p' 00:02:02.530 + tee ssh_conf 00:02:05.087 Host vagrant 00:02:05.087 HostName 192.168.121.39 00:02:05.087 User vagrant 00:02:05.087 Port 22 00:02:05.087 UserKnownHostsFile /dev/null 00:02:05.087 StrictHostKeyChecking no 00:02:05.087 PasswordAuthentication no 00:02:05.087 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:05.087 IdentitiesOnly yes 00:02:05.087 LogLevel FATAL 00:02:05.087 ForwardAgent yes 00:02:05.087 ForwardX11 yes 00:02:05.087 00:02:05.103 [Pipeline] withEnv 00:02:05.106 [Pipeline] { 00:02:05.122 [Pipeline] sh 00:02:05.405 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:02:05.405 source /etc/os-release 00:02:05.405 [[ -e /image.version ]] && img=$(< /image.version) 00:02:05.405 # Minimal, systemd-like check. 
00:02:05.405 if [[ -e /.dockerenv ]]; then 00:02:05.405 # Clear garbage from the node'\''s name: 00:02:05.405 # agt-er_autotest_547-896 -> autotest_547-896 00:02:05.405 # $HOSTNAME is the actual container id 00:02:05.405 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:05.405 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:05.405 # We can assume this is a mount from a host where container is running, 00:02:05.405 # so fetch its hostname to easily identify the target swarm worker. 00:02:05.405 container="$(< /etc/hostname) ($agent)" 00:02:05.405 else 00:02:05.405 # Fallback 00:02:05.405 container=$agent 00:02:05.405 fi 00:02:05.405 fi 00:02:05.405 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:05.405 ' 00:02:05.675 [Pipeline] } 00:02:05.688 [Pipeline] // withEnv 00:02:05.694 [Pipeline] setCustomBuildProperty 00:02:05.707 [Pipeline] stage 00:02:05.709 [Pipeline] { (Tests) 00:02:05.725 [Pipeline] sh 00:02:06.006 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:06.278 [Pipeline] sh 00:02:06.560 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:06.836 [Pipeline] timeout 00:02:06.837 Timeout set to expire in 50 min 00:02:06.839 [Pipeline] { 00:02:06.853 [Pipeline] sh 00:02:07.137 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:02:07.706 HEAD is now at 09cc66129 test/unit: add mixed busy/idle mock poller function in reactor_ut 00:02:07.718 [Pipeline] sh 00:02:07.999 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:02:08.272 [Pipeline] sh 00:02:08.554 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:08.831 [Pipeline] sh 00:02:09.114 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:02:09.375 ++ readlink -f spdk_repo 00:02:09.375 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:09.375 + [[ -n /home/vagrant/spdk_repo ]] 00:02:09.375 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:09.375 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:09.375 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:09.375 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:09.375 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:09.375 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:09.375 + cd /home/vagrant/spdk_repo 00:02:09.375 + source /etc/os-release 00:02:09.375 ++ NAME='Fedora Linux' 00:02:09.375 ++ VERSION='39 (Cloud Edition)' 00:02:09.375 ++ ID=fedora 00:02:09.375 ++ VERSION_ID=39 00:02:09.375 ++ VERSION_CODENAME= 00:02:09.375 ++ PLATFORM_ID=platform:f39 00:02:09.375 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:09.375 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:09.375 ++ LOGO=fedora-logo-icon 00:02:09.375 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:09.375 ++ HOME_URL=https://fedoraproject.org/ 00:02:09.375 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:09.375 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:09.375 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:09.375 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:09.375 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:09.375 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:09.375 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:09.375 ++ SUPPORT_END=2024-11-12 00:02:09.375 ++ VARIANT='Cloud Edition' 00:02:09.375 ++ VARIANT_ID=cloud 00:02:09.375 + uname -a 00:02:09.375 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:09.375 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:09.637 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:09.895 Hugepages 00:02:09.895 node hugesize free / total 00:02:09.895 node0 1048576kB 0 / 0 00:02:09.895 node0 2048kB 0 / 0 00:02:09.895 00:02:09.895 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:09.895 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:09.895 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:09.895 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:09.895 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:09.895 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:02:09.895 + rm -f /tmp/spdk-ld-path 00:02:10.152 + source autorun-spdk.conf 00:02:10.152 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:10.152 ++ SPDK_TEST_NVME=1 00:02:10.152 ++ SPDK_TEST_FTL=1 00:02:10.152 ++ SPDK_TEST_ISAL=1 00:02:10.152 ++ SPDK_RUN_ASAN=1 00:02:10.152 ++ SPDK_RUN_UBSAN=1 00:02:10.152 ++ SPDK_TEST_XNVME=1 00:02:10.152 ++ SPDK_TEST_NVME_FDP=1 00:02:10.152 ++ SPDK_TEST_NATIVE_DPDK=main 00:02:10.152 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:10.152 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:10.152 ++ RUN_NIGHTLY=1 00:02:10.152 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:10.152 + [[ -n '' ]] 00:02:10.152 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:10.152 + for M in /var/spdk/build-*-manifest.txt 00:02:10.152 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:10.152 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:10.152 + for M in /var/spdk/build-*-manifest.txt 00:02:10.152 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:10.152 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:10.152 + for M in /var/spdk/build-*-manifest.txt 00:02:10.152 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:10.152 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:10.152 ++ uname 00:02:10.152 + [[ Linux == 
\L\i\n\u\x ]] 00:02:10.152 + sudo dmesg -T 00:02:10.152 + sudo dmesg --clear 00:02:10.152 + dmesg_pid=5764 00:02:10.152 + [[ Fedora Linux == FreeBSD ]] 00:02:10.152 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:10.152 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:10.152 + sudo dmesg -Tw 00:02:10.152 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:10.152 + [[ -x /usr/src/fio-static/fio ]] 00:02:10.152 + export FIO_BIN=/usr/src/fio-static/fio 00:02:10.152 + FIO_BIN=/usr/src/fio-static/fio 00:02:10.152 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:10.152 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:10.152 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:10.152 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:10.152 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:10.152 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:10.152 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:10.152 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:10.152 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:10.152 Test configuration: 00:02:10.152 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:10.152 SPDK_TEST_NVME=1 00:02:10.152 SPDK_TEST_FTL=1 00:02:10.152 SPDK_TEST_ISAL=1 00:02:10.152 SPDK_RUN_ASAN=1 00:02:10.152 SPDK_RUN_UBSAN=1 00:02:10.152 SPDK_TEST_XNVME=1 00:02:10.152 SPDK_TEST_NVME_FDP=1 00:02:10.152 SPDK_TEST_NATIVE_DPDK=main 00:02:10.152 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:10.152 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:10.152 RUN_NIGHTLY=1 10:22:44 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:02:10.153 10:22:44 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:10.153 10:22:44 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:10.153 10:22:44 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:10.153 10:22:44 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:10.153 10:22:44 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:10.153 10:22:44 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.153 10:22:44 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.153 10:22:44 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.153 10:22:44 -- paths/export.sh@5 -- $ export PATH 00:02:10.153 10:22:44 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.153 10:22:44 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:10.153 10:22:44 -- common/autobuild_common.sh@479 -- $ date +%s 00:02:10.153 10:22:44 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727518964.XXXXXX 00:02:10.153 10:22:44 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727518964.Ui22sf 00:02:10.153 10:22:44 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:02:10.153 10:22:44 -- common/autobuild_common.sh@485 -- $ '[' -n main ']' 00:02:10.153 10:22:44 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:10.153 10:22:44 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:10.153 10:22:44 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:10.153 10:22:44 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:10.153 10:22:44 -- common/autobuild_common.sh@495 -- $ get_config_params 00:02:10.153 10:22:44 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:10.153 10:22:44 -- common/autotest_common.sh@10 -- $ set +x 00:02:10.153 10:22:44 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:02:10.153 10:22:44 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:02:10.153 10:22:44 -- pm/common@17 -- $ local monitor 00:02:10.153 10:22:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:10.153 10:22:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:10.153 10:22:44 -- pm/common@25 -- $ sleep 1 00:02:10.153 10:22:44 -- pm/common@21 -- $ date +%s 00:02:10.153 10:22:44 -- pm/common@21 -- $ date +%s 00:02:10.153 10:22:44 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1727518964 00:02:10.153 10:22:44 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1727518964 00:02:10.153 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1727518964_collect-cpu-load.pm.log 00:02:10.153 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1727518964_collect-vmstat.pm.log 00:02:11.526 10:22:45 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:02:11.526 10:22:45 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:11.526 10:22:45 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:11.526 10:22:45 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:11.526 10:22:45 -- spdk/autobuild.sh@16 -- $ date -u 00:02:11.527 Sat 
Sep 28 10:22:45 AM UTC 2024 00:02:11.527 10:22:45 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:11.527 v25.01-pre-17-g09cc66129 00:02:11.527 10:22:45 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:11.527 10:22:45 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:11.527 10:22:45 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:11.527 10:22:45 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:11.527 10:22:45 -- common/autotest_common.sh@10 -- $ set +x 00:02:11.527 ************************************ 00:02:11.527 START TEST asan 00:02:11.527 ************************************ 00:02:11.527 using asan 00:02:11.527 10:22:45 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:02:11.527 00:02:11.527 real 0m0.000s 00:02:11.527 user 0m0.000s 00:02:11.527 sys 0m0.000s 00:02:11.527 ************************************ 00:02:11.527 END TEST asan 00:02:11.527 10:22:45 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:11.527 10:22:45 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:11.527 ************************************ 00:02:11.527 10:22:45 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:11.527 10:22:45 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:11.527 10:22:45 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:11.527 10:22:45 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:11.527 10:22:45 -- common/autotest_common.sh@10 -- $ set +x 00:02:11.527 ************************************ 00:02:11.527 START TEST ubsan 00:02:11.527 ************************************ 00:02:11.527 using ubsan 00:02:11.527 10:22:45 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:02:11.527 00:02:11.527 real 0m0.000s 00:02:11.527 user 0m0.000s 00:02:11.527 sys 0m0.000s 00:02:11.527 10:22:45 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:11.527 ************************************ 00:02:11.527 END TEST ubsan 00:02:11.527 ************************************ 00:02:11.527 10:22:45 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:11.527 10:22:46 -- spdk/autobuild.sh@27 -- $ '[' -n main ']' 00:02:11.527 10:22:46 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:11.527 10:22:46 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:11.527 10:22:46 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:02:11.527 10:22:46 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:11.527 10:22:46 -- common/autotest_common.sh@10 -- $ set +x 00:02:11.527 ************************************ 00:02:11.527 START TEST build_native_dpdk 00:02:11.527 ************************************ 00:02:11.527 10:22:46 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:02:11.527 10:22:46 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:11.527 10:22:46 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:11.527 10:22:46 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:11.527 10:22:46 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:11.527 10:22:46 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:11.527 10:22:46 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:11.527 10:22:46 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:11.527 10:22:46 build_native_dpdk -- 
common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:11.527 10:22:46 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:11.527 10:22:46 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:11.527 10:22:46 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:11.527 10:22:46 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:11.527 10:22:46 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:11.527 10:22:46 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:11.527 10:22:46 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:11.527 10:22:46 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:11.527 10:22:46 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:11.527 10:22:46 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:11.527 10:22:46 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:11.527 10:22:46 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:11.527 41dd9a6bc2 doc: reorganize prog guide 00:02:11.527 cb9187bc5c version: 24.11-rc0 00:02:11.527 b3485f4293 version: 24.07.0 00:02:11.527 fa58aec335 doc: add tested platforms with NVIDIA NICs 00:02:11.527 ae3e05c916 doc: add tested Intel platforms with Intel NICs 00:02:11.527 10:22:46 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:11.527 10:22:46 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:11.527 10:22:46 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=24.11.0-rc0 00:02:11.527 10:22:46 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:11.527 10:22:46 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:11.527 10:22:46 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:11.527 10:22:46 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:11.527 10:22:46 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:11.527 10:22:46 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:11.527 10:22:46 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:11.527 10:22:46 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:11.527 10:22:46 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:11.527 10:22:46 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:11.527 10:22:46 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:11.527 10:22:46 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:11.527 10:22:46 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:11.527 10:22:46 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:11.527 10:22:46 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 24.11.0-rc0 21.11.0 00:02:11.527 10:22:46 build_native_dpdk -- scripts/common.sh@373 
-- $ cmp_versions 24.11.0-rc0 '<' 21.11.0 00:02:11.527 10:22:46 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:11.527 10:22:46 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:11.527 10:22:46 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:11.527 10:22:46 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:11.527 10:22:46 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:11.527 10:22:46 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:11.527 10:22:46 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:11.527 10:22:46 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4 00:02:11.527 10:22:46 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:11.527 10:22:46 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:11.527 10:22:46 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:11.527 10:22:46 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:11.527 10:22:46 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:11.527 10:22:46 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:11.527 10:22:46 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24 00:02:11.527 10:22:46 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:11.527 10:22:46 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:11.527 10:22:46 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:11.527 10:22:46 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24 00:02:11.527 10:22:46 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:11.527 10:22:46 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:11.527 10:22:46 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:11.527 10:22:46 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:11.527 10:22:46 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:11.527 10:22:46 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:11.527 10:22:46 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:11.527 10:22:46 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:11.527 patching file config/rte_config.h 00:02:11.527 Hunk #1 succeeded at 70 (offset 11 lines). 
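Editor's note: the xtrace output above comes from the cmp_versions helper in scripts/common.sh: both version strings are split on '.', '-' and ':' into arrays (ver1, ver2), and the fields are compared as integers from left to right until one side differs. Below is a condensed sketch of that comparison, reconstructed from the trace rather than taken from scripts/common.sh, and assuming purely numeric fields (the trace shows the real helper first normalising each field through a decimal call):

    # version_lt A B -> returns 0 if A < B, 1 otherwise, comparing dotted versions field by field.
    version_lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            local a=$(( 10#${ver1[v]:-0} )) b=$(( 10#${ver2[v]:-0} ))   # 10# keeps fields like "07" decimal
            (( a > b )) && return 1
            (( a < b )) && return 0
        done
        return 1   # equal versions are not "less than"
    }
    version_lt 24.11.0 21.11.0 || echo "24.11.0 is not older than 21.11.0"   # 24 > 21 on the first field, so it returns 1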
00:02:11.527 10:22:46 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 24.11.0-rc0 24.07.0 00:02:11.527 10:22:46 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 24.11.0-rc0 '<' 24.07.0 00:02:11.527 10:22:46 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:11.527 10:22:46 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:11.527 10:22:46 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:11.527 10:22:46 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:11.527 10:22:46 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:11.527 10:22:46 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:11.527 10:22:46 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:11.527 10:22:46 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4 00:02:11.527 10:22:46 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:11.527 10:22:46 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:11.527 10:22:46 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:11.527 10:22:46 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:11.527 10:22:46 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@364 -- $ (( v++ )) 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 11 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@353 -- $ local d=11 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 11 =~ ^[0-9]+$ ]] 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@355 -- $ echo 11 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=11 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 07 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@353 -- $ local d=07 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 07 =~ ^[0-9]+$ ]] 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@355 -- $ echo 7 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=7 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:11.528 10:22:46 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 24.11.0-rc0 24.07.0 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 24.11.0-rc0 '>=' 24.07.0 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@364 -- $ (( v++ )) 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 11 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@353 -- $ local d=11 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 11 =~ ^[0-9]+$ ]] 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@355 -- $ echo 11 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=11 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 07 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@353 -- $ local d=07 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 07 =~ ^[0-9]+$ ]] 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@355 -- $ echo 7 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=7 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:11.528 10:22:46 build_native_dpdk -- scripts/common.sh@367 -- $ return 0 00:02:11.528 10:22:46 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1 00:02:11.528 patching file drivers/bus/pci/linux/pci_uio.c 00:02:11.528 10:22:46 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:02:11.528 10:22:46 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:02:11.528 10:22:46 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:02:11.528 10:22:46 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:11.528 10:22:46 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:15.774 The Meson build system 00:02:15.774 Version: 1.5.0 00:02:15.774 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:15.774 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:15.774 Build type: native build 00:02:15.774 Program cat found: YES (/usr/bin/cat) 00:02:15.774 Project name: DPDK 00:02:15.774 Project version: 24.11.0-rc0 00:02:15.774 C compiler for the 
host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:15.774 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:15.774 Host machine cpu family: x86_64 00:02:15.774 Host machine cpu: x86_64 00:02:15.774 Message: ## Building in Developer Mode ## 00:02:15.774 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:15.774 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:15.774 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:15.774 Program python3 (elftools) found: YES (/usr/bin/python3) modules: elftools 00:02:15.774 Program cat found: YES (/usr/bin/cat) 00:02:15.774 config/meson.build:120: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 00:02:15.774 Compiler for C supports arguments -march=native: YES 00:02:15.774 Checking for size of "void *" : 8 00:02:15.774 Checking for size of "void *" : 8 (cached) 00:02:15.774 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:15.774 Library m found: YES 00:02:15.774 Library numa found: YES 00:02:15.774 Has header "numaif.h" : YES 00:02:15.774 Library fdt found: NO 00:02:15.774 Library execinfo found: NO 00:02:15.774 Has header "execinfo.h" : YES 00:02:15.774 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:15.774 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:15.774 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:15.774 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:15.774 Run-time dependency openssl found: YES 3.1.1 00:02:15.774 Run-time dependency libpcap found: YES 1.10.4 00:02:15.774 Has header "pcap.h" with dependency libpcap: YES 00:02:15.774 Compiler for C supports arguments -Wcast-qual: YES 00:02:15.774 Compiler for C supports arguments -Wdeprecated: YES 00:02:15.774 Compiler for C supports arguments -Wformat: YES 00:02:15.774 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:15.774 Compiler for C supports arguments -Wformat-security: NO 00:02:15.774 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:15.774 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:15.774 Compiler for C supports arguments -Wnested-externs: YES 00:02:15.774 Compiler for C supports arguments -Wold-style-definition: YES 00:02:15.774 Compiler for C supports arguments -Wpointer-arith: YES 00:02:15.774 Compiler for C supports arguments -Wsign-compare: YES 00:02:15.774 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:15.774 Compiler for C supports arguments -Wundef: YES 00:02:15.774 Compiler for C supports arguments -Wwrite-strings: YES 00:02:15.774 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:15.774 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:15.774 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:15.774 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:15.774 Program objdump found: YES (/usr/bin/objdump) 00:02:15.774 Compiler for C supports arguments -mavx512f: YES 00:02:15.774 Checking if "AVX512 checking" compiles: YES 00:02:15.774 Fetching value of define "__SSE4_2__" : 1 00:02:15.774 Fetching value of define "__AES__" : 1 00:02:15.774 Fetching value of define "__AVX__" : 1 00:02:15.774 Fetching value of define "__AVX2__" : 1 00:02:15.774 Fetching value of define "__AVX512BW__" : 1 00:02:15.774 Fetching value of define "__AVX512CD__" : 1 
00:02:15.774 Fetching value of define "__AVX512DQ__" : 1 00:02:15.774 Fetching value of define "__AVX512F__" : 1 00:02:15.774 Fetching value of define "__AVX512VL__" : 1 00:02:15.774 Fetching value of define "__PCLMUL__" : 1 00:02:15.774 Fetching value of define "__RDRND__" : 1 00:02:15.774 Fetching value of define "__RDSEED__" : 1 00:02:15.774 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:15.774 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:15.774 Message: lib/log: Defining dependency "log" 00:02:15.774 Message: lib/kvargs: Defining dependency "kvargs" 00:02:15.774 Message: lib/argparse: Defining dependency "argparse" 00:02:15.774 Message: lib/telemetry: Defining dependency "telemetry" 00:02:15.774 Checking for function "getentropy" : NO 00:02:15.774 Message: lib/eal: Defining dependency "eal" 00:02:15.774 Message: lib/ptr_compress: Defining dependency "ptr_compress" 00:02:15.774 Message: lib/ring: Defining dependency "ring" 00:02:15.774 Message: lib/rcu: Defining dependency "rcu" 00:02:15.774 Message: lib/mempool: Defining dependency "mempool" 00:02:15.774 Message: lib/mbuf: Defining dependency "mbuf" 00:02:15.774 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:15.774 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:15.774 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:15.774 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:15.774 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:15.774 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:15.774 Compiler for C supports arguments -mpclmul: YES 00:02:15.774 Compiler for C supports arguments -maes: YES 00:02:15.774 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:15.774 Compiler for C supports arguments -mavx512bw: YES 00:02:15.774 Compiler for C supports arguments -mavx512dq: YES 00:02:15.774 Compiler for C supports arguments -mavx512vl: YES 00:02:15.774 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:15.774 Compiler for C supports arguments -mavx2: YES 00:02:15.774 Compiler for C supports arguments -mavx: YES 00:02:15.774 Message: lib/net: Defining dependency "net" 00:02:15.774 Message: lib/meter: Defining dependency "meter" 00:02:15.774 Message: lib/ethdev: Defining dependency "ethdev" 00:02:15.774 Message: lib/pci: Defining dependency "pci" 00:02:15.774 Message: lib/cmdline: Defining dependency "cmdline" 00:02:15.774 Message: lib/metrics: Defining dependency "metrics" 00:02:15.774 Message: lib/hash: Defining dependency "hash" 00:02:15.774 Message: lib/timer: Defining dependency "timer" 00:02:15.774 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:15.774 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:15.774 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:15.774 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:15.774 Message: lib/acl: Defining dependency "acl" 00:02:15.774 Message: lib/bbdev: Defining dependency "bbdev" 00:02:15.774 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:15.774 Run-time dependency libelf found: YES 0.191 00:02:15.774 Message: lib/bpf: Defining dependency "bpf" 00:02:15.774 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:15.774 Message: lib/compressdev: Defining dependency "compressdev" 00:02:15.774 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:15.774 Message: lib/distributor: Defining dependency "distributor" 00:02:15.774 Message: lib/dmadev: Defining dependency "dmadev" 00:02:15.774 Message: lib/efd: Defining 
dependency "efd" 00:02:15.774 Message: lib/eventdev: Defining dependency "eventdev" 00:02:15.774 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:15.774 Message: lib/gpudev: Defining dependency "gpudev" 00:02:15.774 Message: lib/gro: Defining dependency "gro" 00:02:15.774 Message: lib/gso: Defining dependency "gso" 00:02:15.774 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:15.774 Message: lib/jobstats: Defining dependency "jobstats" 00:02:15.774 Message: lib/latencystats: Defining dependency "latencystats" 00:02:15.774 Message: lib/lpm: Defining dependency "lpm" 00:02:15.774 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:15.774 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:15.774 Fetching value of define "__AVX512IFMA__" : 1 00:02:15.774 Message: lib/member: Defining dependency "member" 00:02:15.774 Message: lib/pcapng: Defining dependency "pcapng" 00:02:15.774 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:15.774 Message: lib/power: Defining dependency "power" 00:02:15.774 Message: lib/rawdev: Defining dependency "rawdev" 00:02:15.774 Message: lib/regexdev: Defining dependency "regexdev" 00:02:15.774 Message: lib/mldev: Defining dependency "mldev" 00:02:15.774 Message: lib/rib: Defining dependency "rib" 00:02:15.774 Message: lib/reorder: Defining dependency "reorder" 00:02:15.774 Message: lib/sched: Defining dependency "sched" 00:02:15.774 Message: lib/security: Defining dependency "security" 00:02:15.774 Message: lib/stack: Defining dependency "stack" 00:02:15.774 Has header "linux/userfaultfd.h" : YES 00:02:15.774 Has header "linux/vduse.h" : YES 00:02:15.774 Message: lib/vhost: Defining dependency "vhost" 00:02:15.774 Message: lib/ipsec: Defining dependency "ipsec" 00:02:15.774 Message: lib/pdcp: Defining dependency "pdcp" 00:02:15.774 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:15.774 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:15.774 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:15.774 Message: lib/fib: Defining dependency "fib" 00:02:15.774 Message: lib/port: Defining dependency "port" 00:02:15.774 Message: lib/pdump: Defining dependency "pdump" 00:02:15.774 Message: lib/table: Defining dependency "table" 00:02:15.774 Message: lib/pipeline: Defining dependency "pipeline" 00:02:15.774 Message: lib/graph: Defining dependency "graph" 00:02:15.774 Message: lib/node: Defining dependency "node" 00:02:15.774 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:15.774 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:15.774 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:15.774 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:17.693 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:17.693 Compiler for C supports arguments -Wno-unused-value: YES 00:02:17.693 Compiler for C supports arguments -Wno-format: YES 00:02:17.693 Compiler for C supports arguments -Wno-format-security: YES 00:02:17.693 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:17.693 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:17.693 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:17.693 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:17.693 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:17.693 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:17.693 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:17.693 Compiler for 
C supports arguments -mavx512bw: YES (cached) 00:02:17.693 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:17.693 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:17.693 Has header "sys/epoll.h" : YES 00:02:17.693 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:17.693 Configuring doxy-api-html.conf using configuration 00:02:17.693 Configuring doxy-api-man.conf using configuration 00:02:17.693 Program mandb found: YES (/usr/bin/mandb) 00:02:17.693 Program sphinx-build found: NO 00:02:17.693 Configuring rte_build_config.h using configuration 00:02:17.693 Message: 00:02:17.693 ================= 00:02:17.693 Applications Enabled 00:02:17.693 ================= 00:02:17.693 00:02:17.693 apps: 00:02:17.693 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:17.693 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:17.693 test-pmd, test-regex, test-sad, test-security-perf, 00:02:17.693 00:02:17.693 Message: 00:02:17.693 ================= 00:02:17.693 Libraries Enabled 00:02:17.693 ================= 00:02:17.693 00:02:17.693 libs: 00:02:17.693 log, kvargs, argparse, telemetry, eal, ptr_compress, ring, rcu, 00:02:17.693 mempool, mbuf, net, meter, ethdev, pci, cmdline, metrics, 00:02:17.693 hash, timer, acl, bbdev, bitratestats, bpf, cfgfile, compressdev, 00:02:17.693 cryptodev, distributor, dmadev, efd, eventdev, dispatcher, gpudev, gro, 00:02:17.693 gso, ip_frag, jobstats, latencystats, lpm, member, pcapng, power, 00:02:17.693 rawdev, regexdev, mldev, rib, reorder, sched, security, stack, 00:02:17.693 vhost, ipsec, pdcp, fib, port, pdump, table, pipeline, 00:02:17.693 graph, node, 00:02:17.693 00:02:17.693 Message: 00:02:17.693 =============== 00:02:17.693 Drivers Enabled 00:02:17.693 =============== 00:02:17.693 00:02:17.693 common: 00:02:17.693 00:02:17.693 bus: 00:02:17.693 pci, vdev, 00:02:17.693 mempool: 00:02:17.693 ring, 00:02:17.693 dma: 00:02:17.693 00:02:17.693 net: 00:02:17.693 i40e, 00:02:17.693 raw: 00:02:17.693 00:02:17.693 crypto: 00:02:17.693 00:02:17.693 compress: 00:02:17.693 00:02:17.693 regex: 00:02:17.693 00:02:17.693 ml: 00:02:17.693 00:02:17.693 vdpa: 00:02:17.693 00:02:17.693 event: 00:02:17.693 00:02:17.693 baseband: 00:02:17.693 00:02:17.693 gpu: 00:02:17.693 00:02:17.693 00:02:17.693 Message: 00:02:17.693 ================= 00:02:17.693 Content Skipped 00:02:17.693 ================= 00:02:17.693 00:02:17.693 apps: 00:02:17.693 00:02:17.693 libs: 00:02:17.693 00:02:17.693 drivers: 00:02:17.693 common/cpt: not in enabled drivers build config 00:02:17.693 common/dpaax: not in enabled drivers build config 00:02:17.693 common/iavf: not in enabled drivers build config 00:02:17.693 common/idpf: not in enabled drivers build config 00:02:17.693 common/ionic: not in enabled drivers build config 00:02:17.693 common/mvep: not in enabled drivers build config 00:02:17.693 common/octeontx: not in enabled drivers build config 00:02:17.693 bus/auxiliary: not in enabled drivers build config 00:02:17.693 bus/cdx: not in enabled drivers build config 00:02:17.693 bus/dpaa: not in enabled drivers build config 00:02:17.693 bus/fslmc: not in enabled drivers build config 00:02:17.693 bus/ifpga: not in enabled drivers build config 00:02:17.693 bus/platform: not in enabled drivers build config 00:02:17.693 bus/uacce: not in enabled drivers build config 00:02:17.693 bus/vmbus: not in enabled drivers build config 00:02:17.693 common/cnxk: not in 
enabled drivers build config 00:02:17.693 common/mlx5: not in enabled drivers build config 00:02:17.693 common/nfp: not in enabled drivers build config 00:02:17.693 common/nitrox: not in enabled drivers build config 00:02:17.693 common/qat: not in enabled drivers build config 00:02:17.693 common/sfc_efx: not in enabled drivers build config 00:02:17.694 mempool/bucket: not in enabled drivers build config 00:02:17.694 mempool/cnxk: not in enabled drivers build config 00:02:17.694 mempool/dpaa: not in enabled drivers build config 00:02:17.694 mempool/dpaa2: not in enabled drivers build config 00:02:17.694 mempool/octeontx: not in enabled drivers build config 00:02:17.694 mempool/stack: not in enabled drivers build config 00:02:17.694 dma/cnxk: not in enabled drivers build config 00:02:17.694 dma/dpaa: not in enabled drivers build config 00:02:17.694 dma/dpaa2: not in enabled drivers build config 00:02:17.694 dma/hisilicon: not in enabled drivers build config 00:02:17.694 dma/idxd: not in enabled drivers build config 00:02:17.694 dma/ioat: not in enabled drivers build config 00:02:17.694 dma/odm: not in enabled drivers build config 00:02:17.694 dma/skeleton: not in enabled drivers build config 00:02:17.694 net/af_packet: not in enabled drivers build config 00:02:17.694 net/af_xdp: not in enabled drivers build config 00:02:17.694 net/ark: not in enabled drivers build config 00:02:17.694 net/atlantic: not in enabled drivers build config 00:02:17.694 net/avp: not in enabled drivers build config 00:02:17.694 net/axgbe: not in enabled drivers build config 00:02:17.694 net/bnx2x: not in enabled drivers build config 00:02:17.694 net/bnxt: not in enabled drivers build config 00:02:17.694 net/bonding: not in enabled drivers build config 00:02:17.694 net/cnxk: not in enabled drivers build config 00:02:17.694 net/cpfl: not in enabled drivers build config 00:02:17.694 net/cxgbe: not in enabled drivers build config 00:02:17.694 net/dpaa: not in enabled drivers build config 00:02:17.694 net/dpaa2: not in enabled drivers build config 00:02:17.694 net/e1000: not in enabled drivers build config 00:02:17.694 net/ena: not in enabled drivers build config 00:02:17.694 net/enetc: not in enabled drivers build config 00:02:17.694 net/enetfec: not in enabled drivers build config 00:02:17.694 net/enic: not in enabled drivers build config 00:02:17.694 net/failsafe: not in enabled drivers build config 00:02:17.694 net/fm10k: not in enabled drivers build config 00:02:17.694 net/gve: not in enabled drivers build config 00:02:17.694 net/hinic: not in enabled drivers build config 00:02:17.694 net/hns3: not in enabled drivers build config 00:02:17.694 net/iavf: not in enabled drivers build config 00:02:17.694 net/ice: not in enabled drivers build config 00:02:17.694 net/idpf: not in enabled drivers build config 00:02:17.694 net/igc: not in enabled drivers build config 00:02:17.694 net/ionic: not in enabled drivers build config 00:02:17.694 net/ipn3ke: not in enabled drivers build config 00:02:17.694 net/ixgbe: not in enabled drivers build config 00:02:17.694 net/mana: not in enabled drivers build config 00:02:17.694 net/memif: not in enabled drivers build config 00:02:17.694 net/mlx4: not in enabled drivers build config 00:02:17.694 net/mlx5: not in enabled drivers build config 00:02:17.694 net/mvneta: not in enabled drivers build config 00:02:17.694 net/mvpp2: not in enabled drivers build config 00:02:17.694 net/netvsc: not in enabled drivers build config 00:02:17.694 net/nfb: not in enabled drivers build config 00:02:17.694 
net/nfp: not in enabled drivers build config 00:02:17.694 net/ngbe: not in enabled drivers build config 00:02:17.694 net/ntnic: not in enabled drivers build config 00:02:17.694 net/null: not in enabled drivers build config 00:02:17.694 net/octeontx: not in enabled drivers build config 00:02:17.694 net/octeon_ep: not in enabled drivers build config 00:02:17.694 net/pcap: not in enabled drivers build config 00:02:17.694 net/pfe: not in enabled drivers build config 00:02:17.694 net/qede: not in enabled drivers build config 00:02:17.694 net/ring: not in enabled drivers build config 00:02:17.694 net/sfc: not in enabled drivers build config 00:02:17.694 net/softnic: not in enabled drivers build config 00:02:17.694 net/tap: not in enabled drivers build config 00:02:17.694 net/thunderx: not in enabled drivers build config 00:02:17.694 net/txgbe: not in enabled drivers build config 00:02:17.694 net/vdev_netvsc: not in enabled drivers build config 00:02:17.694 net/vhost: not in enabled drivers build config 00:02:17.694 net/virtio: not in enabled drivers build config 00:02:17.694 net/vmxnet3: not in enabled drivers build config 00:02:17.694 raw/cnxk_bphy: not in enabled drivers build config 00:02:17.694 raw/cnxk_gpio: not in enabled drivers build config 00:02:17.694 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:17.694 raw/ifpga: not in enabled drivers build config 00:02:17.694 raw/ntb: not in enabled drivers build config 00:02:17.694 raw/skeleton: not in enabled drivers build config 00:02:17.694 crypto/armv8: not in enabled drivers build config 00:02:17.694 crypto/bcmfs: not in enabled drivers build config 00:02:17.694 crypto/caam_jr: not in enabled drivers build config 00:02:17.694 crypto/ccp: not in enabled drivers build config 00:02:17.694 crypto/cnxk: not in enabled drivers build config 00:02:17.694 crypto/dpaa_sec: not in enabled drivers build config 00:02:17.694 crypto/dpaa2_sec: not in enabled drivers build config 00:02:17.694 crypto/ionic: not in enabled drivers build config 00:02:17.694 crypto/ipsec_mb: not in enabled drivers build config 00:02:17.694 crypto/mlx5: not in enabled drivers build config 00:02:17.694 crypto/mvsam: not in enabled drivers build config 00:02:17.694 crypto/nitrox: not in enabled drivers build config 00:02:17.694 crypto/null: not in enabled drivers build config 00:02:17.694 crypto/octeontx: not in enabled drivers build config 00:02:17.694 crypto/openssl: not in enabled drivers build config 00:02:17.694 crypto/scheduler: not in enabled drivers build config 00:02:17.694 crypto/uadk: not in enabled drivers build config 00:02:17.694 crypto/virtio: not in enabled drivers build config 00:02:17.694 compress/isal: not in enabled drivers build config 00:02:17.694 compress/mlx5: not in enabled drivers build config 00:02:17.694 compress/nitrox: not in enabled drivers build config 00:02:17.694 compress/octeontx: not in enabled drivers build config 00:02:17.694 compress/uadk: not in enabled drivers build config 00:02:17.694 compress/zlib: not in enabled drivers build config 00:02:17.694 regex/mlx5: not in enabled drivers build config 00:02:17.694 regex/cn9k: not in enabled drivers build config 00:02:17.694 ml/cnxk: not in enabled drivers build config 00:02:17.694 vdpa/ifc: not in enabled drivers build config 00:02:17.694 vdpa/mlx5: not in enabled drivers build config 00:02:17.694 vdpa/nfp: not in enabled drivers build config 00:02:17.694 vdpa/sfc: not in enabled drivers build config 00:02:17.694 event/cnxk: not in enabled drivers build config 00:02:17.694 event/dlb2: 
not in enabled drivers build config 00:02:17.694 event/dpaa: not in enabled drivers build config 00:02:17.694 event/dpaa2: not in enabled drivers build config 00:02:17.694 event/dsw: not in enabled drivers build config 00:02:17.694 event/opdl: not in enabled drivers build config 00:02:17.694 event/skeleton: not in enabled drivers build config 00:02:17.694 event/sw: not in enabled drivers build config 00:02:17.694 event/octeontx: not in enabled drivers build config 00:02:17.694 baseband/acc: not in enabled drivers build config 00:02:17.694 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:17.694 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:17.694 baseband/la12xx: not in enabled drivers build config 00:02:17.694 baseband/null: not in enabled drivers build config 00:02:17.694 baseband/turbo_sw: not in enabled drivers build config 00:02:17.694 gpu/cuda: not in enabled drivers build config 00:02:17.694 00:02:17.694 00:02:17.694 Build targets in project: 219 00:02:17.694 00:02:17.694 DPDK 24.11.0-rc0 00:02:17.694 00:02:17.694 User defined options 00:02:17.694 libdir : lib 00:02:17.694 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:17.694 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:17.694 c_link_args : 00:02:17.694 enable_docs : false 00:02:17.694 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:17.694 enable_kmods : false 00:02:17.694 machine : native 00:02:17.694 tests : false 00:02:17.694 00:02:17.694 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:17.694 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:02:17.694 10:22:52 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:17.694 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:17.694 [1/718] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:17.956 [2/718] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:17.956 [3/718] Linking static target lib/librte_kvargs.a 00:02:17.956 [4/718] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:17.956 [5/718] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:17.956 [6/718] Linking static target lib/librte_log.a 00:02:17.956 [7/718] Compiling C object lib/librte_argparse.a.p/argparse_rte_argparse.c.o 00:02:17.956 [8/718] Linking static target lib/librte_argparse.a 00:02:17.956 [9/718] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.956 [10/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:18.216 [11/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:18.216 [12/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:18.216 [13/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:18.216 [14/718] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:18.216 [15/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:18.216 [16/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:18.216 [17/718] Generating lib/argparse.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.216 [18/718] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.216 [19/718] Compiling C object 
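The "User defined options" block above, together with the warning about invoking meson without the explicit "setup" subcommand, corresponds to a configure step along the following lines. This is a sketch reconstructed from the recorded options, not the literal command issued by the autobuild script; the build directory matches the one ninja enters above:

    # hedged reconstruction of the configure step from the recorded options
    meson setup /home/vagrant/spdk_repo/dpdk/build-tmp \
        --prefix=/home/vagrant/spdk_repo/dpdk/build \
        --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Dmachine=native \
        -Dtests=false \
        -Denable_docs=false \
        -Denable_kmods=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base
    ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10

Per the deprecation warning noted earlier, -Dmachine=native would be spelled -Dcpu_instruction_set=native on current DPDK.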
lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:18.216 [20/718] Linking target lib/librte_log.so.25.0 00:02:18.475 [21/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:18.475 [22/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:18.475 [23/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:18.475 [24/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:18.475 [25/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:18.736 [26/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:18.736 [27/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:18.736 [28/718] Generating symbol file lib/librte_log.so.25.0.p/librte_log.so.25.0.symbols 00:02:18.736 [29/718] Linking target lib/librte_kvargs.so.25.0 00:02:18.736 [30/718] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:18.736 [31/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:18.736 [32/718] Linking static target lib/librte_telemetry.a 00:02:18.736 [33/718] Linking target lib/librte_argparse.so.25.0 00:02:18.736 [34/718] Generating symbol file lib/librte_kvargs.so.25.0.p/librte_kvargs.so.25.0.symbols 00:02:18.995 [35/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:18.995 [36/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:18.995 [37/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:18.995 [38/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:18.995 [39/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:18.995 [40/718] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:18.995 [41/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:18.995 [42/718] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:18.995 [43/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:18.995 [44/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:19.253 [45/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:19.253 [46/718] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.253 [47/718] Linking target lib/librte_telemetry.so.25.0 00:02:19.253 [48/718] Generating symbol file lib/librte_telemetry.so.25.0.p/librte_telemetry.so.25.0.symbols 00:02:19.253 [49/718] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:19.253 [50/718] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:19.511 [51/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:19.511 [52/718] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:19.511 [53/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:19.511 [54/718] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:19.511 [55/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:19.511 [56/718] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:19.770 [57/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:19.770 [58/718] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:19.770 [59/718] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:19.770 [60/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:19.770 [61/718] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:19.770 [62/718] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:19.770 [63/718] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:19.770 [64/718] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:19.770 [65/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:19.770 [66/718] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:19.770 [67/718] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:19.770 [68/718] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:20.028 [69/718] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:20.028 [70/718] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:20.028 [71/718] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:20.028 [72/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:20.286 [73/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:20.286 [74/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:20.286 [75/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:20.286 [76/718] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:20.286 [77/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:20.286 [78/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:20.286 [79/718] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:20.286 [80/718] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:20.286 [81/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:20.545 [82/718] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:20.545 [83/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:20.545 [84/718] Compiling C object lib/librte_eal.a.p/eal_x86_rte_mmu.c.o 00:02:20.545 [85/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:20.545 [86/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:20.545 [87/718] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:20.545 [88/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:20.804 [89/718] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:20.804 [90/718] Linking static target lib/librte_ring.a 00:02:20.804 [91/718] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:20.804 [92/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:20.804 [93/718] Linking static target lib/librte_eal.a 00:02:20.804 [94/718] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:20.804 [95/718] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.804 [96/718] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:20.804 [97/718] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:20.804 [98/718] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:21.062 [99/718] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:21.062 [100/718] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:21.062 [101/718] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:21.062 [102/718] Linking static target lib/librte_rcu.a 00:02:21.062 [103/718] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:21.062 [104/718] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:21.321 [105/718] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:21.321 [106/718] Linking static target lib/librte_mempool.a 00:02:21.321 [107/718] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:21.321 [108/718] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:21.321 [109/718] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.321 [110/718] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:21.321 [111/718] Linking static target lib/librte_meter.a 00:02:21.321 [112/718] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:21.321 [113/718] Linking static target lib/librte_net.a 00:02:21.321 [114/718] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:21.321 [115/718] Linking static target lib/librte_mbuf.a 00:02:21.580 [116/718] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:21.580 [117/718] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:21.580 [118/718] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.580 [119/718] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.580 [120/718] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:21.580 [121/718] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:21.839 [122/718] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.839 [123/718] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.839 [124/718] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:22.097 [125/718] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:22.097 [126/718] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:22.355 [127/718] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:22.355 [128/718] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:22.355 [129/718] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:22.355 [130/718] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:22.355 [131/718] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:22.355 [132/718] Linking static target lib/librte_pci.a 00:02:22.355 [133/718] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:22.355 [134/718] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:22.355 [135/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:22.613 [136/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:22.613 [137/718] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.613 [138/718] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:22.613 [139/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:22.613 [140/718] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:22.613 [141/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:22.613 [142/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:22.613 [143/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:22.613 [144/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:22.613 [145/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:22.613 [146/718] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:22.870 [147/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:22.870 [148/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:22.870 [149/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:22.870 [150/718] Linking static target lib/librte_cmdline.a 00:02:22.870 [151/718] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:22.870 [152/718] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:22.870 [153/718] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:22.870 [154/718] Linking static target lib/librte_metrics.a 00:02:23.127 [155/718] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:23.127 [156/718] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:23.127 [157/718] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:23.127 [158/718] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.386 [159/718] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.386 [160/718] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:23.386 [161/718] Linking static target lib/librte_timer.a 00:02:23.644 [162/718] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:23.644 [163/718] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.644 [164/718] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:23.901 [165/718] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:23.901 [166/718] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:23.901 [167/718] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:24.160 [168/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:24.160 [169/718] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:24.160 [170/718] Linking static target lib/librte_bitratestats.a 00:02:24.418 [171/718] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.418 [172/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:24.418 [173/718] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:24.418 [174/718] Linking static target lib/librte_bbdev.a 00:02:24.676 [175/718] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:24.676 [176/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:24.676 [177/718] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:24.676 [178/718] Linking static target lib/librte_ethdev.a 00:02:24.676 [179/718] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:24.936 [180/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:24.936 [181/718] Compiling C object 
lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:24.936 [182/718] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.936 [183/718] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.936 [184/718] Linking static target lib/librte_hash.a 00:02:24.936 [185/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:24.936 [186/718] Linking target lib/librte_eal.so.25.0 00:02:24.936 [187/718] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:24.936 [188/718] Linking static target lib/acl/libavx2_tmp.a 00:02:25.195 [189/718] Generating symbol file lib/librte_eal.so.25.0.p/librte_eal.so.25.0.symbols 00:02:25.195 [190/718] Linking target lib/librte_ring.so.25.0 00:02:25.195 [191/718] Generating symbol file lib/librte_ring.so.25.0.p/librte_ring.so.25.0.symbols 00:02:25.195 [192/718] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:25.195 [193/718] Linking target lib/librte_rcu.so.25.0 00:02:25.195 [194/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:25.195 [195/718] Linking target lib/librte_mempool.so.25.0 00:02:25.453 [196/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:25.453 [197/718] Linking target lib/librte_meter.so.25.0 00:02:25.453 [198/718] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.453 [199/718] Linking target lib/librte_timer.so.25.0 00:02:25.453 [200/718] Generating symbol file lib/librte_rcu.so.25.0.p/librte_rcu.so.25.0.symbols 00:02:25.453 [201/718] Linking static target lib/librte_cfgfile.a 00:02:25.453 [202/718] Linking target lib/librte_pci.so.25.0 00:02:25.453 [203/718] Generating symbol file lib/librte_meter.so.25.0.p/librte_meter.so.25.0.symbols 00:02:25.453 [204/718] Generating symbol file lib/librte_mempool.so.25.0.p/librte_mempool.so.25.0.symbols 00:02:25.453 [205/718] Linking target lib/librte_mbuf.so.25.0 00:02:25.453 [206/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:25.453 [207/718] Generating symbol file lib/librte_timer.so.25.0.p/librte_timer.so.25.0.symbols 00:02:25.453 [208/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:25.453 [209/718] Generating symbol file lib/librte_pci.so.25.0.p/librte_pci.so.25.0.symbols 00:02:25.453 [210/718] Generating symbol file lib/librte_mbuf.so.25.0.p/librte_mbuf.so.25.0.symbols 00:02:25.711 [211/718] Linking target lib/librte_net.so.25.0 00:02:25.712 [212/718] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:25.712 [213/718] Linking static target lib/librte_acl.a 00:02:25.712 [214/718] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.712 [215/718] Generating symbol file lib/librte_net.so.25.0.p/librte_net.so.25.0.symbols 00:02:25.712 [216/718] Linking target lib/librte_bbdev.so.25.0 00:02:25.712 [217/718] Linking target lib/librte_cmdline.so.25.0 00:02:25.712 [218/718] Linking target lib/librte_hash.so.25.0 00:02:25.712 [219/718] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:25.712 [220/718] Linking target lib/librte_cfgfile.so.25.0 00:02:25.712 [221/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:25.712 [222/718] Linking static target lib/librte_bpf.a 00:02:25.712 [223/718] Generating symbol file lib/librte_hash.so.25.0.p/librte_hash.so.25.0.symbols 00:02:25.970 [224/718] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:25.970 [225/718] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:25.970 [226/718] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.970 [227/718] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:25.970 [228/718] Linking target lib/librte_acl.so.25.0 00:02:25.970 [229/718] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:25.970 [230/718] Linking static target lib/librte_compressdev.a 00:02:25.970 [231/718] Generating symbol file lib/librte_acl.so.25.0.p/librte_acl.so.25.0.symbols 00:02:25.970 [232/718] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.228 [233/718] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:26.228 [234/718] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:26.228 [235/718] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:26.487 [236/718] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.487 [237/718] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:26.487 [238/718] Linking target lib/librte_compressdev.so.25.0 00:02:26.487 [239/718] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:26.487 [240/718] Linking static target lib/librte_distributor.a 00:02:26.487 [241/718] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:26.745 [242/718] Linking static target lib/librte_dmadev.a 00:02:26.745 [243/718] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:26.745 [244/718] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.745 [245/718] Linking target lib/librte_distributor.so.25.0 00:02:27.004 [246/718] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:27.004 [247/718] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:27.004 [248/718] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.004 [249/718] Linking target lib/librte_dmadev.so.25.0 00:02:27.004 [250/718] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:27.004 [251/718] Linking static target lib/librte_efd.a 00:02:27.004 [252/718] Generating symbol file lib/librte_dmadev.so.25.0.p/librte_dmadev.so.25.0.symbols 00:02:27.262 [253/718] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.262 [254/718] Linking target lib/librte_efd.so.25.0 00:02:27.262 [255/718] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:27.262 [256/718] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:27.262 [257/718] Linking static target lib/librte_cryptodev.a 00:02:27.519 [258/718] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:27.520 [259/718] Linking static target lib/librte_dispatcher.a 00:02:27.520 [260/718] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:27.520 [261/718] Linking static target lib/librte_gpudev.a 00:02:27.520 [262/718] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:27.520 [263/718] Compiling C object 
lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:27.520 [264/718] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:27.778 [265/718] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:27.778 [266/718] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.036 [267/718] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:28.036 [268/718] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:28.036 [269/718] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.036 [270/718] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.036 [271/718] Linking target lib/librte_cryptodev.so.25.0 00:02:28.036 [272/718] Linking target lib/librte_gpudev.so.25.0 00:02:28.294 [273/718] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:28.294 [274/718] Generating symbol file lib/librte_cryptodev.so.25.0.p/librte_cryptodev.so.25.0.symbols 00:02:28.294 [275/718] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:28.294 [276/718] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:28.294 [277/718] Linking static target lib/librte_gro.a 00:02:28.294 [278/718] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:28.294 [279/718] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:28.553 [280/718] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:28.553 [281/718] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.553 [282/718] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:28.553 [283/718] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:28.553 [284/718] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:28.553 [285/718] Linking static target lib/librte_gso.a 00:02:28.553 [286/718] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.553 [287/718] Linking target lib/librte_ethdev.so.25.0 00:02:28.553 [288/718] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.812 [289/718] Generating symbol file lib/librte_ethdev.so.25.0.p/librte_ethdev.so.25.0.symbols 00:02:28.812 [290/718] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:28.812 [291/718] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:28.812 [292/718] Linking target lib/librte_metrics.so.25.0 00:02:28.812 [293/718] Linking target lib/librte_gro.so.25.0 00:02:28.812 [294/718] Linking target lib/librte_bpf.so.25.0 00:02:28.812 [295/718] Linking target lib/librte_gso.so.25.0 00:02:28.812 [296/718] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:28.812 [297/718] Generating symbol file lib/librte_metrics.so.25.0.p/librte_metrics.so.25.0.symbols 00:02:28.812 [298/718] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:28.812 [299/718] Linking static target lib/librte_jobstats.a 00:02:28.812 [300/718] Linking target lib/librte_bitratestats.so.25.0 00:02:28.812 [301/718] Generating symbol file lib/librte_bpf.so.25.0.p/librte_bpf.so.25.0.symbols 00:02:28.812 [302/718] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:28.812 [303/718] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:28.812 [304/718] Linking static target 
lib/librte_eventdev.a 00:02:29.070 [305/718] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:29.070 [306/718] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:29.070 [307/718] Linking static target lib/librte_latencystats.a 00:02:29.070 [308/718] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:29.070 [309/718] Linking static target lib/librte_ip_frag.a 00:02:29.070 [310/718] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.070 [311/718] Linking target lib/librte_jobstats.so.25.0 00:02:29.070 [312/718] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.327 [313/718] Linking target lib/librte_latencystats.so.25.0 00:02:29.327 [314/718] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:29.327 [315/718] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.327 [316/718] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:29.327 [317/718] Linking target lib/librte_ip_frag.so.25.0 00:02:29.327 [318/718] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:29.327 [319/718] Generating symbol file lib/librte_ip_frag.so.25.0.p/librte_ip_frag.so.25.0.symbols 00:02:29.327 [320/718] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:29.585 [321/718] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:29.585 [322/718] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o 00:02:29.585 [323/718] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:29.585 [324/718] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:29.842 [325/718] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:29.842 [326/718] Linking static target lib/librte_lpm.a 00:02:29.842 [327/718] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:29.842 [328/718] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:29.842 [329/718] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:29.842 [330/718] Linking static target lib/librte_pcapng.a 00:02:29.842 [331/718] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.100 [332/718] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:30.100 [333/718] Linking target lib/librte_lpm.so.25.0 00:02:30.100 [334/718] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:30.100 [335/718] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:30.100 [336/718] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.100 [337/718] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:30.100 [338/718] Linking target lib/librte_pcapng.so.25.0 00:02:30.100 [339/718] Generating symbol file lib/librte_lpm.so.25.0.p/librte_lpm.so.25.0.symbols 00:02:30.100 [340/718] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:30.100 [341/718] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:30.100 [342/718] Generating symbol file lib/librte_pcapng.so.25.0.p/librte_pcapng.so.25.0.symbols 00:02:30.100 [343/718] Linking static target lib/librte_power.a 00:02:30.357 [344/718] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:30.357 [345/718] 
Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:30.357 [346/718] Linking static target lib/librte_rawdev.a 00:02:30.357 [347/718] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:30.357 [348/718] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:30.357 [349/718] Linking static target lib/librte_regexdev.a 00:02:30.614 [350/718] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.615 [351/718] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:30.615 [352/718] Linking target lib/librte_eventdev.so.25.0 00:02:30.615 [353/718] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:30.615 [354/718] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:30.615 [355/718] Linking static target lib/librte_mldev.a 00:02:30.615 [356/718] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.615 [357/718] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:30.615 [358/718] Generating symbol file lib/librte_eventdev.so.25.0.p/librte_eventdev.so.25.0.symbols 00:02:30.615 [359/718] Linking static target lib/librte_member.a 00:02:30.615 [360/718] Linking target lib/librte_dispatcher.so.25.0 00:02:30.615 [361/718] Linking target lib/librte_power.so.25.0 00:02:30.873 [362/718] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.873 [363/718] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:30.873 [364/718] Linking target lib/librte_rawdev.so.25.0 00:02:30.873 [365/718] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.873 [366/718] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:30.873 [367/718] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:30.873 [368/718] Linking target lib/librte_member.so.25.0 00:02:30.873 [369/718] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.131 [370/718] Linking target lib/librte_regexdev.so.25.0 00:02:31.131 [371/718] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:31.131 [372/718] Linking static target lib/librte_reorder.a 00:02:31.131 [373/718] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:31.131 [374/718] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:31.131 [375/718] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:31.131 [376/718] Linking static target lib/librte_rib.a 00:02:31.131 [377/718] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:31.131 [378/718] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:31.131 [379/718] Linking static target lib/librte_stack.a 00:02:31.433 [380/718] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.433 [381/718] Linking target lib/librte_reorder.so.25.0 00:02:31.433 [382/718] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:31.433 [383/718] Linking static target lib/librte_security.a 00:02:31.433 [384/718] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:31.433 [385/718] Generating symbol file lib/librte_reorder.so.25.0.p/librte_reorder.so.25.0.symbols 00:02:31.433 [386/718] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:31.433 [387/718] Generating lib/stack.sym_chk with a custom command (wrapped 
by meson to capture output) 00:02:31.433 [388/718] Linking target lib/librte_stack.so.25.0 00:02:31.699 [389/718] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.699 [390/718] Linking target lib/librte_rib.so.25.0 00:02:31.699 [391/718] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.699 [392/718] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:31.699 [393/718] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:31.699 [394/718] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.699 [395/718] Linking target lib/librte_security.so.25.0 00:02:31.699 [396/718] Linking target lib/librte_mldev.so.25.0 00:02:31.699 [397/718] Generating symbol file lib/librte_rib.so.25.0.p/librte_rib.so.25.0.symbols 00:02:31.699 [398/718] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:31.699 [399/718] Linking static target lib/librte_sched.a 00:02:31.699 [400/718] Generating symbol file lib/librte_security.so.25.0.p/librte_security.so.25.0.symbols 00:02:31.956 [401/718] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.956 [402/718] Linking target lib/librte_sched.so.25.0 00:02:31.956 [403/718] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:32.214 [404/718] Generating symbol file lib/librte_sched.so.25.0.p/librte_sched.so.25.0.symbols 00:02:32.214 [405/718] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:32.214 [406/718] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:32.471 [407/718] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:32.471 [408/718] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:32.471 [409/718] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:32.729 [410/718] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:32.729 [411/718] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:32.729 [412/718] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:32.729 [413/718] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:32.729 [414/718] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:32.989 [415/718] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:32.989 [416/718] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:32.989 [417/718] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:32.989 [418/718] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:32.989 [419/718] Compiling C object lib/librte_port.a.p/port_port_log.c.o 00:02:32.989 [420/718] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:33.246 [421/718] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:33.246 [422/718] Linking static target lib/librte_ipsec.a 00:02:33.246 [423/718] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:33.246 [424/718] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:33.246 [425/718] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.503 [426/718] Linking target lib/librte_ipsec.so.25.0 00:02:33.503 [427/718] Generating symbol file lib/librte_ipsec.so.25.0.p/librte_ipsec.so.25.0.symbols 00:02:33.503 [428/718] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:33.503 [429/718] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:33.503 [430/718] Linking 
static target lib/librte_fib.a 00:02:33.503 [431/718] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:33.503 [432/718] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:33.761 [433/718] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.761 [434/718] Linking target lib/librte_fib.so.25.0 00:02:33.761 [435/718] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:33.761 [436/718] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:33.761 [437/718] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:34.019 [438/718] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:34.019 [439/718] Linking static target lib/librte_pdcp.a 00:02:34.276 [440/718] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:34.276 [441/718] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:34.276 [442/718] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:34.276 [443/718] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:34.276 [444/718] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:34.276 [445/718] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.276 [446/718] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:34.276 [447/718] Linking target lib/librte_pdcp.so.25.0 00:02:34.533 [448/718] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:34.533 [449/718] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:34.791 [450/718] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:34.791 [451/718] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:34.791 [452/718] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:34.791 [453/718] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:34.791 [454/718] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:35.048 [455/718] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:35.048 [456/718] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:35.048 [457/718] Linking static target lib/librte_pdump.a 00:02:35.048 [458/718] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:35.048 [459/718] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:35.048 [460/718] Linking static target lib/librte_port.a 00:02:35.048 [461/718] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.305 [462/718] Linking target lib/librte_pdump.so.25.0 00:02:35.305 [463/718] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:35.305 [464/718] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:35.305 [465/718] Compiling C object lib/librte_table.a.p/table_table_log.c.o 00:02:35.305 [466/718] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:35.305 [467/718] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:35.305 [468/718] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.561 [469/718] Linking target lib/librte_port.so.25.0 00:02:35.561 [470/718] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:35.561 [471/718] Compiling C object 
lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:35.561 [472/718] Generating symbol file lib/librte_port.so.25.0.p/librte_port.so.25.0.symbols 00:02:35.561 [473/718] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:35.561 [474/718] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:35.561 [475/718] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:35.818 [476/718] Linking static target lib/librte_table.a 00:02:36.076 [477/718] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:36.076 [478/718] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:36.076 [479/718] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.076 [480/718] Linking target lib/librte_table.so.25.0 00:02:36.076 [481/718] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:36.076 [482/718] Generating symbol file lib/librte_table.so.25.0.p/librte_table.so.25.0.symbols 00:02:36.333 [483/718] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:36.333 [484/718] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:36.333 [485/718] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:36.333 [486/718] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:36.632 [487/718] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:36.632 [488/718] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:36.632 [489/718] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:36.632 [490/718] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:36.910 [491/718] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:36.910 [492/718] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:36.910 [493/718] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:36.910 [494/718] Linking static target lib/librte_graph.a 00:02:36.910 [495/718] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:36.910 [496/718] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:37.167 [497/718] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:37.167 [498/718] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:37.425 [499/718] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:37.425 [500/718] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.425 [501/718] Linking target lib/librte_graph.so.25.0 00:02:37.425 [502/718] Generating symbol file lib/librte_graph.so.25.0.p/librte_graph.so.25.0.symbols 00:02:37.682 [503/718] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:37.682 [504/718] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:37.682 [505/718] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:37.682 [506/718] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:37.682 [507/718] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:37.682 [508/718] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:37.682 [509/718] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:37.939 [510/718] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:37.939 [511/718] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:37.939 [512/718] Compiling C object 
lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:37.939 [513/718] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:38.197 [514/718] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:38.197 [515/718] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:38.197 [516/718] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:38.197 [517/718] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:38.197 [518/718] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:38.197 [519/718] Linking static target lib/librte_node.a 00:02:38.197 [520/718] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:38.455 [521/718] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:38.455 [522/718] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:38.455 [523/718] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.455 [524/718] Linking target lib/librte_node.so.25.0 00:02:38.455 [525/718] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:38.455 [526/718] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:38.455 [527/718] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:38.455 [528/718] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:38.455 [529/718] Linking static target drivers/librte_bus_pci.a 00:02:38.455 [530/718] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:38.455 [531/718] Compiling C object drivers/librte_bus_pci.so.25.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:38.455 [532/718] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:38.455 [533/718] Linking static target drivers/librte_bus_vdev.a 00:02:38.712 [534/718] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:38.712 [535/718] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:38.712 [536/718] Compiling C object drivers/librte_bus_vdev.so.25.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:38.712 [537/718] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:38.712 [538/718] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.712 [539/718] Linking target drivers/librte_bus_vdev.so.25.0 00:02:38.712 [540/718] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.712 [541/718] Linking target drivers/librte_bus_pci.so.25.0 00:02:38.970 [542/718] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:38.970 [543/718] Generating symbol file drivers/librte_bus_vdev.so.25.0.p/librte_bus_vdev.so.25.0.symbols 00:02:38.970 [544/718] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:38.970 [545/718] Generating symbol file drivers/librte_bus_pci.so.25.0.p/librte_bus_pci.so.25.0.symbols 00:02:38.970 [546/718] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:38.970 [547/718] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:38.970 [548/718] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:38.970 [549/718] Linking static target drivers/librte_mempool_ring.a 00:02:38.970 [550/718] Compiling C object 
drivers/librte_mempool_ring.so.25.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:38.970 [551/718] Linking target drivers/librte_mempool_ring.so.25.0 00:02:39.229 [552/718] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:39.487 [553/718] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:39.487 [554/718] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:39.487 [555/718] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:39.744 [556/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:40.002 [557/718] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:40.002 [558/718] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:40.002 [559/718] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:40.002 [560/718] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:40.260 [561/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:40.518 [562/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:40.518 [563/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:40.518 [564/718] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:40.518 [565/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:40.518 [566/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:40.777 [567/718] Generating app/graph/commands_hdr with a custom command (wrapped by meson to capture output) 00:02:40.777 [568/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:41.035 [569/718] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:02:41.035 [570/718] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:41.035 [571/718] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:02:41.295 [572/718] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:02:41.295 [573/718] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:02:41.295 [574/718] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:41.558 [575/718] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:02:41.558 [576/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:41.558 [577/718] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:02:41.558 [578/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:41.558 [579/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:41.816 [580/718] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:02:41.816 [581/718] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:02:41.816 [582/718] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:02:41.816 [583/718] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:02:41.816 [584/718] Compiling C object app/dpdk-graph.p/graph_l2fwd.c.o 00:02:41.816 [585/718] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:02:41.816 [586/718] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:02:41.816 [587/718] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:42.074 [588/718] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:42.074 [589/718] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:42.074 [590/718] 
Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:42.075 [591/718] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:42.332 [592/718] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:42.332 [593/718] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:42.332 [594/718] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:42.332 [595/718] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:42.332 [596/718] Compiling C object drivers/librte_net_i40e.so.25.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:42.332 [597/718] Linking static target drivers/librte_net_i40e.a 00:02:42.332 [598/718] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:42.590 [599/718] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:42.590 [600/718] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:42.848 [601/718] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:42.848 [602/718] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.848 [603/718] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:42.848 [604/718] Linking target drivers/librte_net_i40e.so.25.0 00:02:42.848 [605/718] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:42.848 [606/718] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:43.106 [607/718] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:43.106 [608/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:43.106 [609/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:43.364 [610/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:43.364 [611/718] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:43.364 [612/718] Linking static target lib/librte_vhost.a 00:02:43.364 [613/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:43.364 [614/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:43.364 [615/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:43.364 [616/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:43.364 [617/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:43.622 [618/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:43.622 [619/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:43.622 [620/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:43.879 [621/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:43.879 [622/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:43.879 [623/718] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:02:43.879 [624/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:44.146 [625/718] Compiling C object 
app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:44.147 [626/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:44.147 [627/718] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:02:44.147 [628/718] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.147 [629/718] Linking target lib/librte_vhost.so.25.0 00:02:44.423 [630/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:44.681 [631/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:44.681 [632/718] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:44.681 [633/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:44.681 [634/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:44.939 [635/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:44.939 [636/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:44.939 [637/718] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:44.939 [638/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:02:44.939 [639/718] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:44.939 [640/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:02:44.939 [641/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:02:44.939 [642/718] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:45.196 [643/718] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:45.196 [644/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:02:45.196 [645/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:02:45.196 [646/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:02:45.454 [647/718] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:45.454 [648/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:02:45.454 [649/718] Linking static target lib/librte_pipeline.a 00:02:45.454 [650/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:02:45.454 [651/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:02:45.454 [652/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:02:45.454 [653/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:02:45.711 [654/718] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:45.711 [655/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:45.711 [656/718] Linking target app/dpdk-dumpcap 00:02:45.711 [657/718] Linking target app/dpdk-graph 00:02:45.711 [658/718] Linking target app/dpdk-pdump 00:02:45.711 [659/718] Linking target app/dpdk-proc-info 00:02:45.971 [660/718] Linking target app/dpdk-test-acl 00:02:45.971 [661/718] Linking target app/dpdk-test-cmdline 00:02:45.971 [662/718] Linking target app/dpdk-test-compress-perf 00:02:45.971 [663/718] Linking target app/dpdk-test-crypto-perf 00:02:46.229 [664/718] Linking target app/dpdk-test-fib 00:02:46.229 [665/718] Linking target app/dpdk-test-dma-perf 00:02:46.229 [666/718] Linking target app/dpdk-test-flow-perf 00:02:46.229 [667/718] Linking 
target app/dpdk-test-gpudev 00:02:46.229 [668/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:46.229 [669/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:46.229 [670/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:46.487 [671/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:46.487 [672/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:02:46.487 [673/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:46.487 [674/718] Linking target app/dpdk-test-eventdev 00:02:46.487 [675/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:46.487 [676/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:46.487 [677/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:46.746 [678/718] Linking target app/dpdk-test-mldev 00:02:46.746 [679/718] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:46.746 [680/718] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:02:46.746 [681/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:47.004 [682/718] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:47.004 [683/718] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:47.004 [684/718] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.004 [685/718] Linking target app/dpdk-test-pipeline 00:02:47.004 [686/718] Linking target lib/librte_pipeline.so.25.0 00:02:47.261 [687/718] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:47.261 [688/718] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:47.518 [689/718] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:47.518 [690/718] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:47.518 [691/718] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:47.775 [692/718] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:47.775 [693/718] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:47.775 [694/718] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:02:47.775 [695/718] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:48.032 [696/718] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:48.032 [697/718] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:48.032 [698/718] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:48.032 [699/718] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:48.032 [700/718] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:48.290 [701/718] Linking target app/dpdk-test-bbdev 00:02:48.290 [702/718] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:48.548 [703/718] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:48.548 [704/718] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:48.548 [705/718] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:48.548 [706/718] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:48.548 [707/718] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:48.806 [708/718] Compiling C object 
app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:48.806 [709/718] Linking target app/dpdk-test-sad 00:02:48.806 [710/718] Linking target app/dpdk-test-regex 00:02:48.806 [711/718] Compiling C object app/dpdk-test-security-perf.p/test_test_security_proto.c.o 00:02:48.806 [712/718] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:49.063 [713/718] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:49.063 [714/718] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:49.063 [715/718] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:49.063 [716/718] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:49.323 [717/718] Linking target app/dpdk-testpmd 00:02:49.582 [718/718] Linking target app/dpdk-test-security-perf 00:02:49.582 10:23:24 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:02:49.582 10:23:24 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:49.582 10:23:24 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:02:49.582 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:49.582 [0/1] Installing files. 00:02:49.843 Installing subdir /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:02:49.843 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/counters.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:02:49.843 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/cpu.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:02:49.843 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/memory.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:02:49.843 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:02:49.843 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:49.843 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:49.843 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:49.843 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:49.843 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:49.843 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:49.843 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:49.843 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:49.843 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:49.843 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:49.843 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:49.843 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:49.843 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:49.843 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:49.843 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:49.843 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:49.843 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:49.844 Installing 
/home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.844 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.844 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.845 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:49.845 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:49.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:49.846 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.846 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.847 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:49.847 Installing 
/home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:49.847 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:49.848 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:49.848 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:49.848 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:49.848 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:49.848 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:49.848 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:49.848 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:49.848 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:49.848 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:49.848 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:49.848 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:49.848 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:49.848 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:49.848 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:49.848 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:49.848 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:49.848 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:49.848 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:49.848 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:49.848 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:49.848 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:49.848 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:49.848 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:49.848 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:49.848 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:49.848 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:49.848 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:49.848 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:49.848 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:49.848 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:49.848 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:49.848 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:49.848 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:49.848 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:49.848 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.848 Installing lib/librte_log.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.848 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.848 Installing lib/librte_kvargs.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.848 Installing lib/librte_argparse.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.848 Installing lib/librte_argparse.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.848 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.848 Installing lib/librte_telemetry.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.848 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.848 Installing lib/librte_eal.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.848 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.848 Installing lib/librte_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.848 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.848 Installing lib/librte_rcu.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.848 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.848 Installing lib/librte_mempool.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.848 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.848 Installing lib/librte_mbuf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.848 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.848 Installing lib/librte_net.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.848 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.848 Installing lib/librte_meter.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.848 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.848 Installing lib/librte_ethdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.848 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.848 Installing lib/librte_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.848 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.848 Installing lib/librte_cmdline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.848 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.848 Installing lib/librte_metrics.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.848 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.848 Installing lib/librte_hash.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.848 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.848 Installing lib/librte_timer.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.848 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.848 Installing lib/librte_acl.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.848 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.848 Installing 
lib/librte_bbdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.848 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.848 Installing lib/librte_bitratestats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.848 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.848 Installing lib/librte_bpf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.848 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.848 Installing lib/librte_cfgfile.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.848 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.848 Installing lib/librte_compressdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.848 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.848 Installing lib/librte_cryptodev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.848 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.848 Installing lib/librte_distributor.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.848 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.848 Installing lib/librte_dmadev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.848 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.848 Installing lib/librte_efd.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.848 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.849 Installing lib/librte_eventdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.849 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.849 Installing lib/librte_dispatcher.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.849 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.849 Installing lib/librte_gpudev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.849 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.849 Installing lib/librte_gro.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.849 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.849 Installing lib/librte_gso.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.849 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.849 Installing lib/librte_ip_frag.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.849 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.849 Installing lib/librte_jobstats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.849 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.849 Installing lib/librte_latencystats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.849 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.849 Installing lib/librte_lpm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.849 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.849 Installing lib/librte_member.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.849 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.849 Installing lib/librte_pcapng.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.849 Installing lib/librte_power.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.849 Installing lib/librte_power.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.849 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.849 Installing lib/librte_rawdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.849 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.849 Installing lib/librte_regexdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.849 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.849 Installing lib/librte_mldev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.849 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.849 Installing lib/librte_rib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.849 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.849 Installing lib/librte_reorder.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.849 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.849 Installing lib/librte_sched.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.849 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.849 Installing lib/librte_security.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.849 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.849 Installing lib/librte_stack.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.849 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.849 Installing lib/librte_vhost.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.110 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.110 Installing lib/librte_ipsec.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.110 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.110 Installing lib/librte_pdcp.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.110 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.110 Installing lib/librte_fib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.110 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.110 Installing lib/librte_port.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.110 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.110 Installing lib/librte_pdump.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.110 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.110 Installing lib/librte_table.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.111 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.111 Installing lib/librte_pipeline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.111 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.111 Installing lib/librte_graph.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.111 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.111 Installing lib/librte_node.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.111 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.111 Installing drivers/librte_bus_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:02:50.111 Installing 
drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.111 Installing drivers/librte_bus_vdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:02:50.111 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.111 Installing drivers/librte_mempool_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:02:50.111 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.111 Installing drivers/librte_net_i40e.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:02:50.111 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.111 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.111 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.111 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.111 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.111 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.111 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.111 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.111 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.111 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.111 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.111 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.111 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.111 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.111 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.111 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.111 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.111 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.111 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.111 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/argparse/rte_argparse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:50.111 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 
Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/ptr_compress/rte_ptr_compress.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing 
/home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.111 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing 
/home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing 
/home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.112 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing 
/home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to 
/home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry-exporter.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:50.113 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:50.113 Installing symlink pointing to librte_log.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.25 00:02:50.113 Installing symlink pointing to librte_log.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:02:50.113 Installing symlink pointing to librte_kvargs.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.25 00:02:50.113 Installing symlink pointing to librte_kvargs.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:02:50.113 Installing symlink pointing to librte_argparse.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so.25 00:02:50.113 Installing symlink pointing to librte_argparse.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so 00:02:50.113 Installing symlink pointing to librte_telemetry.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.25 00:02:50.113 Installing symlink pointing to librte_telemetry.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:02:50.113 Installing symlink pointing to librte_eal.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.25 00:02:50.113 Installing symlink pointing to librte_eal.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:02:50.113 Installing symlink pointing to librte_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.25 00:02:50.113 Installing symlink pointing to librte_ring.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:02:50.113 Installing symlink pointing to librte_rcu.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.25 00:02:50.113 Installing symlink pointing to librte_rcu.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:02:50.113 Installing symlink pointing to librte_mempool.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.25 00:02:50.113 Installing symlink pointing to librte_mempool.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:02:50.113 Installing symlink pointing to librte_mbuf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.25 00:02:50.113 Installing symlink pointing to librte_mbuf.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:02:50.113 Installing symlink pointing to librte_net.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.25 00:02:50.113 Installing symlink pointing to librte_net.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:02:50.113 Installing symlink pointing to librte_meter.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.25 00:02:50.113 Installing symlink pointing to librte_meter.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 
00:02:50.113 Installing symlink pointing to librte_ethdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.25 00:02:50.114 Installing symlink pointing to librte_ethdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:02:50.114 Installing symlink pointing to librte_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.25 00:02:50.114 Installing symlink pointing to librte_pci.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:02:50.114 Installing symlink pointing to librte_cmdline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.25 00:02:50.114 Installing symlink pointing to librte_cmdline.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:02:50.114 Installing symlink pointing to librte_metrics.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.25 00:02:50.114 Installing symlink pointing to librte_metrics.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:02:50.114 Installing symlink pointing to librte_hash.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.25 00:02:50.114 Installing symlink pointing to librte_hash.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:02:50.114 Installing symlink pointing to librte_timer.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.25 00:02:50.114 Installing symlink pointing to librte_timer.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:02:50.114 Installing symlink pointing to librte_acl.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.25 00:02:50.114 Installing symlink pointing to librte_acl.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:02:50.114 Installing symlink pointing to librte_bbdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.25 00:02:50.114 Installing symlink pointing to librte_bbdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:02:50.114 Installing symlink pointing to librte_bitratestats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.25 00:02:50.114 Installing symlink pointing to librte_bitratestats.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:02:50.114 Installing symlink pointing to librte_bpf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.25 00:02:50.114 Installing symlink pointing to librte_bpf.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:02:50.114 Installing symlink pointing to librte_cfgfile.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.25 00:02:50.114 Installing symlink pointing to librte_cfgfile.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:02:50.114 Installing symlink pointing to librte_compressdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.25 00:02:50.114 Installing symlink pointing to librte_compressdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:02:50.114 Installing symlink pointing to librte_cryptodev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.25 00:02:50.114 Installing symlink pointing to librte_cryptodev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:02:50.114 Installing symlink pointing to librte_distributor.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.25 00:02:50.114 Installing symlink pointing to librte_distributor.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 
00:02:50.114 Installing symlink pointing to librte_dmadev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.25 00:02:50.114 Installing symlink pointing to librte_dmadev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:02:50.114 Installing symlink pointing to librte_efd.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.25 00:02:50.114 Installing symlink pointing to librte_efd.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:02:50.114 Installing symlink pointing to librte_eventdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.25 00:02:50.114 Installing symlink pointing to librte_eventdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:02:50.114 Installing symlink pointing to librte_dispatcher.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.25 00:02:50.114 Installing symlink pointing to librte_dispatcher.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:02:50.114 Installing symlink pointing to librte_gpudev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.25 00:02:50.114 Installing symlink pointing to librte_gpudev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:02:50.114 Installing symlink pointing to librte_gro.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.25 00:02:50.114 Installing symlink pointing to librte_gro.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:02:50.114 Installing symlink pointing to librte_gso.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.25 00:02:50.114 Installing symlink pointing to librte_gso.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:02:50.114 Installing symlink pointing to librte_ip_frag.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.25 00:02:50.114 Installing symlink pointing to librte_ip_frag.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:02:50.114 Installing symlink pointing to librte_jobstats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.25 00:02:50.114 Installing symlink pointing to librte_jobstats.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:02:50.114 Installing symlink pointing to librte_latencystats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.25 00:02:50.114 Installing symlink pointing to librte_latencystats.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:02:50.114 Installing symlink pointing to librte_lpm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.25 00:02:50.114 Installing symlink pointing to librte_lpm.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:02:50.114 Installing symlink pointing to librte_member.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.25 00:02:50.114 Installing symlink pointing to librte_member.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:02:50.114 Installing symlink pointing to librte_pcapng.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.25 00:02:50.114 './librte_bus_pci.so' -> 'dpdk/pmds-25.0/librte_bus_pci.so' 00:02:50.114 './librte_bus_pci.so.25' -> 'dpdk/pmds-25.0/librte_bus_pci.so.25' 00:02:50.114 './librte_bus_pci.so.25.0' -> 'dpdk/pmds-25.0/librte_bus_pci.so.25.0' 00:02:50.114 './librte_bus_vdev.so' -> 'dpdk/pmds-25.0/librte_bus_vdev.so' 00:02:50.114 './librte_bus_vdev.so.25' -> 'dpdk/pmds-25.0/librte_bus_vdev.so.25' 
00:02:50.114 './librte_bus_vdev.so.25.0' -> 'dpdk/pmds-25.0/librte_bus_vdev.so.25.0' 00:02:50.114 './librte_mempool_ring.so' -> 'dpdk/pmds-25.0/librte_mempool_ring.so' 00:02:50.114 './librte_mempool_ring.so.25' -> 'dpdk/pmds-25.0/librte_mempool_ring.so.25' 00:02:50.114 './librte_mempool_ring.so.25.0' -> 'dpdk/pmds-25.0/librte_mempool_ring.so.25.0' 00:02:50.114 './librte_net_i40e.so' -> 'dpdk/pmds-25.0/librte_net_i40e.so' 00:02:50.114 './librte_net_i40e.so.25' -> 'dpdk/pmds-25.0/librte_net_i40e.so.25' 00:02:50.114 './librte_net_i40e.so.25.0' -> 'dpdk/pmds-25.0/librte_net_i40e.so.25.0' 00:02:50.114 Installing symlink pointing to librte_pcapng.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:02:50.114 Installing symlink pointing to librte_power.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.25 00:02:50.114 Installing symlink pointing to librte_power.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:02:50.114 Installing symlink pointing to librte_rawdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.25 00:02:50.114 Installing symlink pointing to librte_rawdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:02:50.114 Installing symlink pointing to librte_regexdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.25 00:02:50.114 Installing symlink pointing to librte_regexdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:02:50.114 Installing symlink pointing to librte_mldev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.25 00:02:50.114 Installing symlink pointing to librte_mldev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:02:50.114 Installing symlink pointing to librte_rib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.25 00:02:50.114 Installing symlink pointing to librte_rib.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:02:50.114 Installing symlink pointing to librte_reorder.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.25 00:02:50.114 Installing symlink pointing to librte_reorder.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:02:50.114 Installing symlink pointing to librte_sched.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.25 00:02:50.114 Installing symlink pointing to librte_sched.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:02:50.114 Installing symlink pointing to librte_security.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.25 00:02:50.114 Installing symlink pointing to librte_security.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:02:50.114 Installing symlink pointing to librte_stack.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.25 00:02:50.114 Installing symlink pointing to librte_stack.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:02:50.114 Installing symlink pointing to librte_vhost.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.25 00:02:50.114 Installing symlink pointing to librte_vhost.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:02:50.114 Installing symlink pointing to librte_ipsec.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.25 00:02:50.114 Installing symlink pointing to librte_ipsec.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:02:50.114 Installing symlink pointing to librte_pdcp.so.25.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.25 00:02:50.114 Installing symlink pointing to librte_pdcp.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:02:50.114 Installing symlink pointing to librte_fib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.25 00:02:50.114 Installing symlink pointing to librte_fib.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:02:50.114 Installing symlink pointing to librte_port.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.25 00:02:50.114 Installing symlink pointing to librte_port.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:02:50.114 Installing symlink pointing to librte_pdump.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.25 00:02:50.114 Installing symlink pointing to librte_pdump.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:02:50.114 Installing symlink pointing to librte_table.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.25 00:02:50.114 Installing symlink pointing to librte_table.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:02:50.114 Installing symlink pointing to librte_pipeline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.25 00:02:50.114 Installing symlink pointing to librte_pipeline.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:02:50.114 Installing symlink pointing to librte_graph.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.25 00:02:50.114 Installing symlink pointing to librte_graph.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:02:50.114 Installing symlink pointing to librte_node.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.25 00:02:50.114 Installing symlink pointing to librte_node.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:02:50.114 Installing symlink pointing to librte_bus_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so.25 00:02:50.114 Installing symlink pointing to librte_bus_pci.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so 00:02:50.114 Installing symlink pointing to librte_bus_vdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so.25 00:02:50.114 Installing symlink pointing to librte_bus_vdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so 00:02:50.114 Installing symlink pointing to librte_mempool_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so.25 00:02:50.114 Installing symlink pointing to librte_mempool_ring.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so 00:02:50.114 Installing symlink pointing to librte_net_i40e.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so.25 00:02:50.114 Installing symlink pointing to librte_net_i40e.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so 00:02:50.114 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-25.0' 00:02:50.115 10:23:24 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:02:50.115 10:23:24 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:50.115 00:02:50.115 real 0m38.863s 00:02:50.115 user 4m31.100s 00:02:50.115 sys 0m39.242s 00:02:50.115 10:23:24 build_native_dpdk -- 
common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:50.115 10:23:24 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:50.115 ************************************ 00:02:50.115 END TEST build_native_dpdk 00:02:50.115 ************************************ 00:02:50.373 10:23:24 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:50.373 10:23:24 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:50.373 10:23:24 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:50.373 10:23:24 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:50.373 10:23:24 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:50.373 10:23:24 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:50.373 10:23:24 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:50.373 10:23:24 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared 00:02:50.373 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:02:50.373 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.373 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:02:50.373 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:50.631 Using 'verbs' RDMA provider 00:03:01.538 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:11.501 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:11.501 Creating mk/config.mk...done. 00:03:11.501 Creating mk/cc.flags.mk...done. 00:03:11.501 Type 'make' to build. 00:03:11.501 10:23:45 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:11.501 10:23:45 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:11.501 10:23:45 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:11.501 10:23:45 -- common/autotest_common.sh@10 -- $ set +x 00:03:11.501 ************************************ 00:03:11.501 START TEST make 00:03:11.501 ************************************ 00:03:11.501 10:23:45 make -- common/autotest_common.sh@1125 -- $ make -j10 00:03:11.501 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:03:11.501 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:03:11.501 meson setup builddir \ 00:03:11.501 -Dwith-libaio=enabled \ 00:03:11.501 -Dwith-liburing=enabled \ 00:03:11.501 -Dwith-libvfn=disabled \ 00:03:11.501 -Dwith-spdk=false && \ 00:03:11.501 meson compile -C builddir && \ 00:03:11.501 cd -) 00:03:11.501 make[1]: Nothing to be done for 'all'. 
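Note: the configure step above points SPDK at the DPDK artifacts installed earlier in this log (headers under /home/vagrant/spdk_repo/dpdk/build/include, libraries and libdpdk.pc under /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig). As a minimal sketch, assuming only the install paths shown in this log, an out-of-tree consumer could locate that same DPDK build through pkg-config roughly like this (my_app.c and the compile line are illustrative, not taken from the log):

    # Point pkg-config at the just-installed DPDK build (path as installed above).
    export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig:$PKG_CONFIG_PATH
    # Query compile and link flags from the libdpdk.pc installed by the step above.
    pkg-config --cflags libdpdk
    pkg-config --libs libdpdk
    # Hypothetical build of a small program against that DPDK install using those flags.
    cc -O2 my_app.c $(pkg-config --cflags --libs libdpdk) -o my_app

The --with-dpdk=/home/vagrant/spdk_repo/dpdk/build flag in the configure line above appears to serve the same purpose for SPDK itself, which is why the log reports "Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs".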
00:03:12.945 The Meson build system 00:03:12.945 Version: 1.5.0 00:03:12.945 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:03:12.945 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:12.945 Build type: native build 00:03:12.945 Project name: xnvme 00:03:12.945 Project version: 0.7.3 00:03:12.945 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:12.945 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:12.945 Host machine cpu family: x86_64 00:03:12.945 Host machine cpu: x86_64 00:03:12.945 Message: host_machine.system: linux 00:03:12.945 Compiler for C supports arguments -Wno-missing-braces: YES 00:03:12.945 Compiler for C supports arguments -Wno-cast-function-type: YES 00:03:12.945 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:12.945 Run-time dependency threads found: YES 00:03:12.945 Has header "setupapi.h" : NO 00:03:12.945 Has header "linux/blkzoned.h" : YES 00:03:12.945 Has header "linux/blkzoned.h" : YES (cached) 00:03:12.945 Has header "libaio.h" : YES 00:03:12.945 Library aio found: YES 00:03:12.945 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:12.945 Run-time dependency liburing found: YES 2.2 00:03:12.945 Dependency libvfn skipped: feature with-libvfn disabled 00:03:12.945 Run-time dependency appleframeworks found: NO (tried framework) 00:03:12.945 Run-time dependency appleframeworks found: NO (tried framework) 00:03:12.945 Configuring xnvme_config.h using configuration 00:03:12.945 Configuring xnvme.spec using configuration 00:03:12.945 Run-time dependency bash-completion found: YES 2.11 00:03:12.945 Message: Bash-completions: /usr/share/bash-completion/completions 00:03:12.945 Program cp found: YES (/usr/bin/cp) 00:03:12.945 Has header "winsock2.h" : NO 00:03:12.945 Has header "dbghelp.h" : NO 00:03:12.945 Library rpcrt4 found: NO 00:03:12.945 Library rt found: YES 00:03:12.945 Checking for function "clock_gettime" with dependency -lrt: YES 00:03:12.945 Found CMake: /usr/bin/cmake (3.27.7) 00:03:12.945 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:03:12.945 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:03:12.945 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:03:12.945 Build targets in project: 32 00:03:12.945 00:03:12.945 xnvme 0.7.3 00:03:12.945 00:03:12.945 User defined options 00:03:12.945 with-libaio : enabled 00:03:12.945 with-liburing: enabled 00:03:12.945 with-libvfn : disabled 00:03:12.945 with-spdk : false 00:03:12.945 00:03:12.945 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:13.204 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:03:13.204 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:03:13.463 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:03:13.463 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:03:13.463 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:03:13.463 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:03:13.463 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:03:13.463 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:03:13.463 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:03:13.463 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:03:13.463 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:03:13.463 
[11/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:03:13.463 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:03:13.463 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:03:13.463 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:03:13.463 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:03:13.463 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:03:13.463 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:03:13.463 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:03:13.463 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:03:13.463 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:03:13.463 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:03:13.463 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:03:13.722 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:03:13.722 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:03:13.722 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:03:13.722 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:03:13.722 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:03:13.722 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:03:13.722 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:03:13.722 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:03:13.722 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:03:13.722 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:03:13.722 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:03:13.722 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:03:13.722 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:03:13.722 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:03:13.722 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:03:13.722 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:03:13.722 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:03:13.722 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:03:13.722 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:03:13.722 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:03:13.722 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:03:13.722 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:03:13.722 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:03:13.722 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:03:13.722 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:03:13.722 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:03:13.722 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:03:13.722 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:03:13.722 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:03:13.722 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:03:13.722 [53/203] Compiling C object 
lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:03:13.722 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:03:13.722 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:03:13.981 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:03:13.981 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:03:13.981 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:03:13.981 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:03:13.981 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:03:13.981 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:03:13.981 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:03:13.981 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:03:13.981 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:03:13.981 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:03:13.981 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:03:13.981 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:03:13.981 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:03:13.981 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:03:13.981 [70/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:03:13.981 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:03:13.981 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:03:13.981 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:03:13.981 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:03:13.981 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:03:13.981 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:03:14.240 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:03:14.240 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:03:14.240 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:03:14.240 [80/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:03:14.240 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:03:14.240 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:03:14.240 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:03:14.240 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:03:14.240 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:03:14.240 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:03:14.240 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:03:14.240 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:03:14.240 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:03:14.240 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:03:14.240 [91/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:03:14.240 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:03:14.240 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:03:14.240 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:03:14.240 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:03:14.240 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:03:14.240 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:03:14.240 [98/203] Compiling C 
object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:03:14.240 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:03:14.499 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:03:14.499 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:03:14.499 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:03:14.499 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:03:14.499 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:03:14.499 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:03:14.499 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:03:14.499 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:03:14.499 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:03:14.499 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:03:14.499 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:03:14.499 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:03:14.499 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:03:14.499 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:03:14.499 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:03:14.499 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:03:14.499 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:03:14.499 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:03:14.499 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:03:14.499 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:03:14.499 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:03:14.499 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:03:14.499 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:03:14.499 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:03:14.499 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:03:14.499 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:03:14.499 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:03:14.499 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:03:14.499 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:03:14.499 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:03:14.499 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:03:14.499 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:03:14.499 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:03:14.758 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:03:14.758 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:03:14.758 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:03:14.758 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:03:14.758 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:03:14.758 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:03:14.758 [139/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:03:14.758 [140/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:03:14.758 [141/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:03:14.758 [142/203] Compiling C object 
tests/xnvme_tests_async_intf.p/async_intf.c.o 00:03:14.758 [143/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:03:14.758 [144/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:03:14.758 [145/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:03:14.758 [146/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:03:14.758 [147/203] Linking target lib/libxnvme.so 00:03:14.758 [148/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:03:14.758 [149/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:03:14.758 [150/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:03:14.758 [151/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:03:14.758 [152/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:03:15.016 [153/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:03:15.016 [154/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:03:15.016 [155/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:03:15.016 [156/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:03:15.016 [157/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:03:15.016 [158/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:03:15.016 [159/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:03:15.016 [160/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:03:15.016 [161/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:03:15.016 [162/203] Compiling C object tools/zoned.p/zoned.c.o 00:03:15.016 [163/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:03:15.016 [164/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:03:15.016 [165/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:03:15.016 [166/203] Compiling C object tools/xdd.p/xdd.c.o 00:03:15.016 [167/203] Compiling C object tools/lblk.p/lblk.c.o 00:03:15.016 [168/203] Compiling C object tools/kvs.p/kvs.c.o 00:03:15.016 [169/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:03:15.016 [170/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:03:15.275 [171/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:03:15.275 [172/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:03:15.275 [173/203] Linking static target lib/libxnvme.a 00:03:15.275 [174/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:03:15.275 [175/203] Linking target tests/xnvme_tests_ioworker 00:03:15.275 [176/203] Linking target tests/xnvme_tests_xnvme_cli 00:03:15.275 [177/203] Linking target tests/xnvme_tests_buf 00:03:15.275 [178/203] Linking target tests/xnvme_tests_async_intf 00:03:15.275 [179/203] Linking target tests/xnvme_tests_znd_explicit_open 00:03:15.275 [180/203] Linking target tests/xnvme_tests_xnvme_file 00:03:15.275 [181/203] Linking target tests/xnvme_tests_cli 00:03:15.275 [182/203] Linking target tests/xnvme_tests_lblk 00:03:15.275 [183/203] Linking target tests/xnvme_tests_scc 00:03:15.275 [184/203] Linking target tests/xnvme_tests_enum 00:03:15.275 [185/203] Linking target tests/xnvme_tests_znd_append 00:03:15.275 [186/203] Linking target tests/xnvme_tests_map 00:03:15.275 [187/203] Linking target tests/xnvme_tests_znd_zrwa 00:03:15.275 [188/203] Linking target tests/xnvme_tests_znd_state 00:03:15.275 [189/203] Linking target tools/xdd 00:03:15.275 [190/203] Linking target tests/xnvme_tests_kvs 00:03:15.275 
[191/203] Linking target tools/xnvme_file 00:03:15.275 [192/203] Linking target examples/xnvme_dev 00:03:15.275 [193/203] Linking target tools/zoned 00:03:15.275 [194/203] Linking target tools/lblk 00:03:15.275 [195/203] Linking target tools/kvs 00:03:15.275 [196/203] Linking target examples/xnvme_hello 00:03:15.275 [197/203] Linking target examples/xnvme_io_async 00:03:15.275 [198/203] Linking target tools/xnvme 00:03:15.275 [199/203] Linking target examples/xnvme_enum 00:03:15.275 [200/203] Linking target examples/xnvme_single_async 00:03:15.275 [201/203] Linking target examples/zoned_io_async 00:03:15.275 [202/203] Linking target examples/xnvme_single_sync 00:03:15.275 [203/203] Linking target examples/zoned_io_sync 00:03:15.275 INFO: autodetecting backend as ninja 00:03:15.275 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:15.275 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:03:47.340 CC lib/ut_mock/mock.o 00:03:47.340 CC lib/log/log_flags.o 00:03:47.340 CC lib/log/log.o 00:03:47.340 CC lib/ut/ut.o 00:03:47.340 CC lib/log/log_deprecated.o 00:03:47.340 LIB libspdk_log.a 00:03:47.340 LIB libspdk_ut.a 00:03:47.340 LIB libspdk_ut_mock.a 00:03:47.340 SO libspdk_log.so.7.0 00:03:47.340 SO libspdk_ut.so.2.0 00:03:47.340 SO libspdk_ut_mock.so.6.0 00:03:47.340 SYMLINK libspdk_log.so 00:03:47.340 SYMLINK libspdk_ut.so 00:03:47.340 SYMLINK libspdk_ut_mock.so 00:03:47.340 CXX lib/trace_parser/trace.o 00:03:47.340 CC lib/util/base64.o 00:03:47.340 CC lib/util/bit_array.o 00:03:47.340 CC lib/util/cpuset.o 00:03:47.340 CC lib/util/crc16.o 00:03:47.340 CC lib/util/crc32.o 00:03:47.340 CC lib/ioat/ioat.o 00:03:47.340 CC lib/util/crc32c.o 00:03:47.340 CC lib/dma/dma.o 00:03:47.340 CC lib/vfio_user/host/vfio_user_pci.o 00:03:47.340 CC lib/util/crc32_ieee.o 00:03:47.340 CC lib/util/crc64.o 00:03:47.340 CC lib/util/dif.o 00:03:47.340 CC lib/util/fd.o 00:03:47.340 CC lib/util/fd_group.o 00:03:47.340 LIB libspdk_dma.a 00:03:47.340 CC lib/util/file.o 00:03:47.340 SO libspdk_dma.so.5.0 00:03:47.340 CC lib/util/hexlify.o 00:03:47.340 CC lib/util/iov.o 00:03:47.340 SYMLINK libspdk_dma.so 00:03:47.340 CC lib/util/math.o 00:03:47.340 CC lib/util/net.o 00:03:47.340 LIB libspdk_ioat.a 00:03:47.340 CC lib/vfio_user/host/vfio_user.o 00:03:47.340 SO libspdk_ioat.so.7.0 00:03:47.340 CC lib/util/pipe.o 00:03:47.340 SYMLINK libspdk_ioat.so 00:03:47.340 CC lib/util/strerror_tls.o 00:03:47.340 CC lib/util/string.o 00:03:47.340 CC lib/util/uuid.o 00:03:47.340 CC lib/util/xor.o 00:03:47.340 CC lib/util/zipf.o 00:03:47.340 CC lib/util/md5.o 00:03:47.340 LIB libspdk_vfio_user.a 00:03:47.340 SO libspdk_vfio_user.so.5.0 00:03:47.340 SYMLINK libspdk_vfio_user.so 00:03:47.340 LIB libspdk_util.a 00:03:47.340 SO libspdk_util.so.10.0 00:03:47.340 SYMLINK libspdk_util.so 00:03:47.340 LIB libspdk_trace_parser.a 00:03:47.340 SO libspdk_trace_parser.so.6.0 00:03:47.340 CC lib/conf/conf.o 00:03:47.340 CC lib/env_dpdk/env.o 00:03:47.340 CC lib/env_dpdk/memory.o 00:03:47.340 CC lib/vmd/vmd.o 00:03:47.340 CC lib/idxd/idxd.o 00:03:47.340 CC lib/idxd/idxd_user.o 00:03:47.340 CC lib/json/json_parse.o 00:03:47.341 CC lib/rdma_provider/common.o 00:03:47.341 CC lib/rdma_utils/rdma_utils.o 00:03:47.341 SYMLINK libspdk_trace_parser.so 00:03:47.341 CC lib/env_dpdk/pci.o 00:03:47.341 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:47.341 CC lib/env_dpdk/init.o 00:03:47.341 LIB libspdk_conf.a 00:03:47.341 CC lib/json/json_util.o 00:03:47.341 SO libspdk_conf.so.6.0 00:03:47.341 LIB 
libspdk_rdma_utils.a 00:03:47.341 SYMLINK libspdk_conf.so 00:03:47.341 CC lib/json/json_write.o 00:03:47.341 SO libspdk_rdma_utils.so.1.0 00:03:47.341 CC lib/env_dpdk/threads.o 00:03:47.341 SYMLINK libspdk_rdma_utils.so 00:03:47.341 CC lib/env_dpdk/pci_ioat.o 00:03:47.341 LIB libspdk_rdma_provider.a 00:03:47.341 SO libspdk_rdma_provider.so.6.0 00:03:47.341 SYMLINK libspdk_rdma_provider.so 00:03:47.341 CC lib/vmd/led.o 00:03:47.341 CC lib/env_dpdk/pci_virtio.o 00:03:47.341 CC lib/env_dpdk/pci_vmd.o 00:03:47.341 CC lib/env_dpdk/pci_idxd.o 00:03:47.341 CC lib/env_dpdk/pci_event.o 00:03:47.341 CC lib/env_dpdk/sigbus_handler.o 00:03:47.341 LIB libspdk_json.a 00:03:47.341 CC lib/env_dpdk/pci_dpdk.o 00:03:47.341 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:47.341 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:47.341 CC lib/idxd/idxd_kernel.o 00:03:47.341 SO libspdk_json.so.6.0 00:03:47.341 SYMLINK libspdk_json.so 00:03:47.341 LIB libspdk_vmd.a 00:03:47.341 SO libspdk_vmd.so.6.0 00:03:47.341 LIB libspdk_idxd.a 00:03:47.601 SYMLINK libspdk_vmd.so 00:03:47.601 SO libspdk_idxd.so.12.1 00:03:47.601 CC lib/jsonrpc/jsonrpc_server.o 00:03:47.601 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:47.601 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:47.601 CC lib/jsonrpc/jsonrpc_client.o 00:03:47.601 SYMLINK libspdk_idxd.so 00:03:47.862 LIB libspdk_jsonrpc.a 00:03:47.862 SO libspdk_jsonrpc.so.6.0 00:03:47.862 SYMLINK libspdk_jsonrpc.so 00:03:48.124 LIB libspdk_env_dpdk.a 00:03:48.124 CC lib/rpc/rpc.o 00:03:48.124 SO libspdk_env_dpdk.so.15.0 00:03:48.383 SYMLINK libspdk_env_dpdk.so 00:03:48.383 LIB libspdk_rpc.a 00:03:48.383 SO libspdk_rpc.so.6.0 00:03:48.383 SYMLINK libspdk_rpc.so 00:03:48.644 CC lib/trace/trace.o 00:03:48.644 CC lib/keyring/keyring_rpc.o 00:03:48.644 CC lib/keyring/keyring.o 00:03:48.644 CC lib/trace/trace_rpc.o 00:03:48.644 CC lib/trace/trace_flags.o 00:03:48.644 CC lib/notify/notify.o 00:03:48.644 CC lib/notify/notify_rpc.o 00:03:48.644 LIB libspdk_notify.a 00:03:48.644 SO libspdk_notify.so.6.0 00:03:48.904 LIB libspdk_keyring.a 00:03:48.904 SYMLINK libspdk_notify.so 00:03:48.904 LIB libspdk_trace.a 00:03:48.904 SO libspdk_keyring.so.2.0 00:03:48.904 SO libspdk_trace.so.11.0 00:03:48.904 SYMLINK libspdk_keyring.so 00:03:48.904 SYMLINK libspdk_trace.so 00:03:49.164 CC lib/sock/sock.o 00:03:49.164 CC lib/sock/sock_rpc.o 00:03:49.164 CC lib/thread/thread.o 00:03:49.164 CC lib/thread/iobuf.o 00:03:49.425 LIB libspdk_sock.a 00:03:49.425 SO libspdk_sock.so.10.0 00:03:49.685 SYMLINK libspdk_sock.so 00:03:49.944 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:49.944 CC lib/nvme/nvme_ctrlr.o 00:03:49.944 CC lib/nvme/nvme_fabric.o 00:03:49.944 CC lib/nvme/nvme_ns.o 00:03:49.944 CC lib/nvme/nvme_ns_cmd.o 00:03:49.944 CC lib/nvme/nvme_pcie.o 00:03:49.944 CC lib/nvme/nvme_pcie_common.o 00:03:49.944 CC lib/nvme/nvme_qpair.o 00:03:49.944 CC lib/nvme/nvme.o 00:03:50.510 CC lib/nvme/nvme_quirks.o 00:03:50.510 CC lib/nvme/nvme_transport.o 00:03:50.510 CC lib/nvme/nvme_discovery.o 00:03:50.510 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:50.510 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:50.510 LIB libspdk_thread.a 00:03:50.510 CC lib/nvme/nvme_tcp.o 00:03:50.770 SO libspdk_thread.so.10.1 00:03:50.770 CC lib/nvme/nvme_opal.o 00:03:50.770 SYMLINK libspdk_thread.so 00:03:50.770 CC lib/nvme/nvme_io_msg.o 00:03:50.770 CC lib/accel/accel.o 00:03:50.770 CC lib/nvme/nvme_poll_group.o 00:03:51.028 CC lib/nvme/nvme_zns.o 00:03:51.028 CC lib/nvme/nvme_stubs.o 00:03:51.028 CC lib/nvme/nvme_auth.o 00:03:51.289 CC lib/blob/blobstore.o 00:03:51.289 CC lib/blob/request.o 
00:03:51.289 CC lib/blob/zeroes.o 00:03:51.289 CC lib/blob/blob_bs_dev.o 00:03:51.547 CC lib/nvme/nvme_cuse.o 00:03:51.548 CC lib/accel/accel_rpc.o 00:03:51.548 CC lib/nvme/nvme_rdma.o 00:03:51.548 CC lib/accel/accel_sw.o 00:03:51.806 CC lib/init/json_config.o 00:03:51.806 CC lib/virtio/virtio.o 00:03:51.806 CC lib/init/subsystem.o 00:03:51.806 CC lib/init/subsystem_rpc.o 00:03:52.064 CC lib/init/rpc.o 00:03:52.064 LIB libspdk_accel.a 00:03:52.064 CC lib/virtio/virtio_vhost_user.o 00:03:52.064 CC lib/virtio/virtio_vfio_user.o 00:03:52.064 SO libspdk_accel.so.16.0 00:03:52.064 CC lib/virtio/virtio_pci.o 00:03:52.064 LIB libspdk_init.a 00:03:52.064 SYMLINK libspdk_accel.so 00:03:52.064 SO libspdk_init.so.6.0 00:03:52.321 SYMLINK libspdk_init.so 00:03:52.321 CC lib/fsdev/fsdev.o 00:03:52.321 CC lib/fsdev/fsdev_io.o 00:03:52.321 CC lib/bdev/bdev.o 00:03:52.321 CC lib/fsdev/fsdev_rpc.o 00:03:52.321 CC lib/bdev/bdev_rpc.o 00:03:52.321 CC lib/bdev/bdev_zone.o 00:03:52.321 CC lib/event/app.o 00:03:52.321 CC lib/event/reactor.o 00:03:52.321 LIB libspdk_virtio.a 00:03:52.321 SO libspdk_virtio.so.7.0 00:03:52.579 CC lib/event/log_rpc.o 00:03:52.579 SYMLINK libspdk_virtio.so 00:03:52.579 CC lib/event/app_rpc.o 00:03:52.579 CC lib/bdev/part.o 00:03:52.579 CC lib/event/scheduler_static.o 00:03:52.579 CC lib/bdev/scsi_nvme.o 00:03:52.836 LIB libspdk_nvme.a 00:03:52.836 LIB libspdk_event.a 00:03:52.836 SO libspdk_event.so.14.0 00:03:52.836 LIB libspdk_fsdev.a 00:03:52.836 SYMLINK libspdk_event.so 00:03:52.836 SO libspdk_fsdev.so.1.0 00:03:52.836 SO libspdk_nvme.so.14.0 00:03:53.093 SYMLINK libspdk_fsdev.so 00:03:53.093 SYMLINK libspdk_nvme.so 00:03:53.093 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:54.023 LIB libspdk_fuse_dispatcher.a 00:03:54.023 SO libspdk_fuse_dispatcher.so.1.0 00:03:54.023 SYMLINK libspdk_fuse_dispatcher.so 00:03:54.281 LIB libspdk_blob.a 00:03:54.281 SO libspdk_blob.so.11.0 00:03:54.281 SYMLINK libspdk_blob.so 00:03:54.538 CC lib/blobfs/blobfs.o 00:03:54.538 CC lib/blobfs/tree.o 00:03:54.538 CC lib/lvol/lvol.o 00:03:55.104 LIB libspdk_bdev.a 00:03:55.104 SO libspdk_bdev.so.16.0 00:03:55.104 SYMLINK libspdk_bdev.so 00:03:55.361 CC lib/nvmf/ctrlr.o 00:03:55.361 CC lib/nvmf/ctrlr_discovery.o 00:03:55.361 CC lib/nvmf/ctrlr_bdev.o 00:03:55.361 CC lib/nvmf/subsystem.o 00:03:55.361 CC lib/ublk/ublk.o 00:03:55.361 CC lib/nbd/nbd.o 00:03:55.361 CC lib/scsi/dev.o 00:03:55.361 LIB libspdk_blobfs.a 00:03:55.361 CC lib/ftl/ftl_core.o 00:03:55.361 SO libspdk_blobfs.so.10.0 00:03:55.361 SYMLINK libspdk_blobfs.so 00:03:55.361 CC lib/ftl/ftl_init.o 00:03:55.619 LIB libspdk_lvol.a 00:03:55.619 CC lib/scsi/lun.o 00:03:55.619 SO libspdk_lvol.so.10.0 00:03:55.619 SYMLINK libspdk_lvol.so 00:03:55.619 CC lib/ftl/ftl_layout.o 00:03:55.619 CC lib/nbd/nbd_rpc.o 00:03:55.619 CC lib/ublk/ublk_rpc.o 00:03:55.619 CC lib/ftl/ftl_debug.o 00:03:55.876 CC lib/nvmf/nvmf.o 00:03:55.876 LIB libspdk_nbd.a 00:03:55.876 CC lib/nvmf/nvmf_rpc.o 00:03:55.876 SO libspdk_nbd.so.7.0 00:03:55.876 CC lib/scsi/port.o 00:03:55.876 CC lib/nvmf/transport.o 00:03:55.877 SYMLINK libspdk_nbd.so 00:03:55.877 CC lib/scsi/scsi.o 00:03:55.877 LIB libspdk_ublk.a 00:03:55.877 SO libspdk_ublk.so.3.0 00:03:55.877 CC lib/ftl/ftl_io.o 00:03:55.877 CC lib/ftl/ftl_sb.o 00:03:55.877 SYMLINK libspdk_ublk.so 00:03:55.877 CC lib/nvmf/tcp.o 00:03:55.877 CC lib/nvmf/stubs.o 00:03:56.134 CC lib/scsi/scsi_bdev.o 00:03:56.134 CC lib/ftl/ftl_l2p.o 00:03:56.134 CC lib/ftl/ftl_l2p_flat.o 00:03:56.392 CC lib/scsi/scsi_pr.o 00:03:56.392 CC 
lib/ftl/ftl_nv_cache.o 00:03:56.392 CC lib/nvmf/mdns_server.o 00:03:56.392 CC lib/scsi/scsi_rpc.o 00:03:56.392 CC lib/ftl/ftl_band.o 00:03:56.392 CC lib/ftl/ftl_band_ops.o 00:03:56.650 CC lib/scsi/task.o 00:03:56.650 CC lib/nvmf/rdma.o 00:03:56.650 CC lib/nvmf/auth.o 00:03:56.650 CC lib/ftl/ftl_writer.o 00:03:56.650 LIB libspdk_scsi.a 00:03:56.650 CC lib/ftl/ftl_rq.o 00:03:56.908 SO libspdk_scsi.so.9.0 00:03:56.908 CC lib/ftl/ftl_reloc.o 00:03:56.908 CC lib/ftl/ftl_l2p_cache.o 00:03:56.908 CC lib/ftl/ftl_p2l.o 00:03:56.908 SYMLINK libspdk_scsi.so 00:03:56.908 CC lib/ftl/ftl_p2l_log.o 00:03:56.908 CC lib/ftl/mngt/ftl_mngt.o 00:03:57.166 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:57.166 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:57.166 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:57.166 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:57.166 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:57.166 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:57.424 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:57.424 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:57.424 CC lib/iscsi/conn.o 00:03:57.424 CC lib/vhost/vhost.o 00:03:57.424 CC lib/vhost/vhost_rpc.o 00:03:57.424 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:57.424 CC lib/iscsi/init_grp.o 00:03:57.424 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:57.424 CC lib/iscsi/iscsi.o 00:03:57.682 CC lib/iscsi/param.o 00:03:57.682 CC lib/iscsi/portal_grp.o 00:03:57.682 CC lib/iscsi/tgt_node.o 00:03:57.682 CC lib/vhost/vhost_scsi.o 00:03:57.682 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:57.940 CC lib/iscsi/iscsi_subsystem.o 00:03:57.940 CC lib/iscsi/iscsi_rpc.o 00:03:57.940 CC lib/iscsi/task.o 00:03:57.940 CC lib/vhost/vhost_blk.o 00:03:57.940 CC lib/vhost/rte_vhost_user.o 00:03:57.940 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:58.198 CC lib/ftl/utils/ftl_conf.o 00:03:58.198 CC lib/ftl/utils/ftl_md.o 00:03:58.198 CC lib/ftl/utils/ftl_mempool.o 00:03:58.198 CC lib/ftl/utils/ftl_bitmap.o 00:03:58.198 CC lib/ftl/utils/ftl_property.o 00:03:58.198 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:58.198 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:58.198 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:58.198 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:58.456 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:58.456 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:58.456 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:58.456 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:58.456 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:58.456 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:58.456 LIB libspdk_nvmf.a 00:03:58.456 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:58.456 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:58.714 SO libspdk_nvmf.so.19.0 00:03:58.714 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:58.714 CC lib/ftl/base/ftl_base_dev.o 00:03:58.714 CC lib/ftl/base/ftl_base_bdev.o 00:03:58.714 CC lib/ftl/ftl_trace.o 00:03:58.714 LIB libspdk_iscsi.a 00:03:58.714 SO libspdk_iscsi.so.8.0 00:03:58.714 SYMLINK libspdk_nvmf.so 00:03:58.714 LIB libspdk_vhost.a 00:03:58.972 LIB libspdk_ftl.a 00:03:58.972 SYMLINK libspdk_iscsi.so 00:03:58.972 SO libspdk_vhost.so.8.0 00:03:58.972 SYMLINK libspdk_vhost.so 00:03:58.972 SO libspdk_ftl.so.9.0 00:03:59.231 SYMLINK libspdk_ftl.so 00:03:59.489 CC module/env_dpdk/env_dpdk_rpc.o 00:03:59.489 CC module/fsdev/aio/fsdev_aio.o 00:03:59.489 CC module/accel/ioat/accel_ioat.o 00:03:59.489 CC module/accel/error/accel_error.o 00:03:59.489 CC module/accel/iaa/accel_iaa.o 00:03:59.489 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:59.489 CC module/sock/posix/posix.o 00:03:59.489 CC module/accel/dsa/accel_dsa.o 00:03:59.489 CC module/keyring/file/keyring.o 00:03:59.489 CC 
module/blob/bdev/blob_bdev.o 00:03:59.489 LIB libspdk_env_dpdk_rpc.a 00:03:59.489 SO libspdk_env_dpdk_rpc.so.6.0 00:03:59.489 SYMLINK libspdk_env_dpdk_rpc.so 00:03:59.489 CC module/keyring/file/keyring_rpc.o 00:03:59.748 CC module/accel/iaa/accel_iaa_rpc.o 00:03:59.748 CC module/accel/error/accel_error_rpc.o 00:03:59.748 CC module/accel/ioat/accel_ioat_rpc.o 00:03:59.748 LIB libspdk_scheduler_dynamic.a 00:03:59.748 SO libspdk_scheduler_dynamic.so.4.0 00:03:59.748 LIB libspdk_keyring_file.a 00:03:59.748 CC module/accel/dsa/accel_dsa_rpc.o 00:03:59.748 SO libspdk_keyring_file.so.2.0 00:03:59.748 SYMLINK libspdk_scheduler_dynamic.so 00:03:59.748 LIB libspdk_accel_iaa.a 00:03:59.748 LIB libspdk_accel_ioat.a 00:03:59.748 LIB libspdk_accel_error.a 00:03:59.748 SYMLINK libspdk_keyring_file.so 00:03:59.748 SO libspdk_accel_iaa.so.3.0 00:03:59.748 SO libspdk_accel_ioat.so.6.0 00:03:59.748 SO libspdk_accel_error.so.2.0 00:03:59.748 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:59.748 LIB libspdk_blob_bdev.a 00:03:59.748 SYMLINK libspdk_accel_iaa.so 00:03:59.748 SYMLINK libspdk_accel_error.so 00:03:59.748 SYMLINK libspdk_accel_ioat.so 00:03:59.748 LIB libspdk_accel_dsa.a 00:03:59.748 CC module/fsdev/aio/linux_aio_mgr.o 00:03:59.748 SO libspdk_blob_bdev.so.11.0 00:03:59.748 SO libspdk_accel_dsa.so.5.0 00:03:59.748 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:00.007 SYMLINK libspdk_blob_bdev.so 00:04:00.007 SYMLINK libspdk_accel_dsa.so 00:04:00.007 CC module/keyring/linux/keyring.o 00:04:00.007 CC module/keyring/linux/keyring_rpc.o 00:04:00.007 CC module/scheduler/gscheduler/gscheduler.o 00:04:00.007 LIB libspdk_scheduler_dpdk_governor.a 00:04:00.007 LIB libspdk_keyring_linux.a 00:04:00.007 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:00.007 SO libspdk_keyring_linux.so.1.0 00:04:00.007 LIB libspdk_fsdev_aio.a 00:04:00.007 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:00.007 SO libspdk_fsdev_aio.so.1.0 00:04:00.007 CC module/blobfs/bdev/blobfs_bdev.o 00:04:00.007 SYMLINK libspdk_keyring_linux.so 00:04:00.007 LIB libspdk_scheduler_gscheduler.a 00:04:00.007 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:00.007 SO libspdk_scheduler_gscheduler.so.4.0 00:04:00.007 CC module/bdev/gpt/gpt.o 00:04:00.007 CC module/bdev/delay/vbdev_delay.o 00:04:00.007 CC module/bdev/error/vbdev_error.o 00:04:00.007 SYMLINK libspdk_fsdev_aio.so 00:04:00.007 CC module/bdev/error/vbdev_error_rpc.o 00:04:00.265 CC module/bdev/lvol/vbdev_lvol.o 00:04:00.265 SYMLINK libspdk_scheduler_gscheduler.so 00:04:00.265 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:00.266 LIB libspdk_sock_posix.a 00:04:00.266 LIB libspdk_blobfs_bdev.a 00:04:00.266 CC module/bdev/malloc/bdev_malloc.o 00:04:00.266 SO libspdk_blobfs_bdev.so.6.0 00:04:00.266 SO libspdk_sock_posix.so.6.0 00:04:00.266 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:00.266 CC module/bdev/gpt/vbdev_gpt.o 00:04:00.266 SYMLINK libspdk_blobfs_bdev.so 00:04:00.266 SYMLINK libspdk_sock_posix.so 00:04:00.266 CC module/bdev/null/bdev_null.o 00:04:00.266 LIB libspdk_bdev_error.a 00:04:00.266 SO libspdk_bdev_error.so.6.0 00:04:00.266 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:00.266 SYMLINK libspdk_bdev_error.so 00:04:00.525 CC module/bdev/nvme/bdev_nvme.o 00:04:00.525 CC module/bdev/passthru/vbdev_passthru.o 00:04:00.525 LIB libspdk_bdev_gpt.a 00:04:00.525 LIB libspdk_bdev_delay.a 00:04:00.525 SO libspdk_bdev_gpt.so.6.0 00:04:00.525 SO libspdk_bdev_delay.so.6.0 00:04:00.525 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:00.525 LIB libspdk_bdev_malloc.a 00:04:00.525 LIB 
libspdk_bdev_lvol.a 00:04:00.525 SYMLINK libspdk_bdev_gpt.so 00:04:00.525 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:00.525 CC module/bdev/raid/bdev_raid.o 00:04:00.525 SO libspdk_bdev_malloc.so.6.0 00:04:00.525 SO libspdk_bdev_lvol.so.6.0 00:04:00.525 SYMLINK libspdk_bdev_delay.so 00:04:00.525 CC module/bdev/nvme/nvme_rpc.o 00:04:00.525 CC module/bdev/null/bdev_null_rpc.o 00:04:00.525 SYMLINK libspdk_bdev_malloc.so 00:04:00.525 SYMLINK libspdk_bdev_lvol.so 00:04:00.525 CC module/bdev/nvme/bdev_mdns_client.o 00:04:00.525 CC module/bdev/raid/bdev_raid_rpc.o 00:04:00.525 CC module/bdev/raid/bdev_raid_sb.o 00:04:00.525 LIB libspdk_bdev_passthru.a 00:04:00.783 CC module/bdev/split/vbdev_split.o 00:04:00.783 SO libspdk_bdev_passthru.so.6.0 00:04:00.783 LIB libspdk_bdev_null.a 00:04:00.783 CC module/bdev/nvme/vbdev_opal.o 00:04:00.783 SO libspdk_bdev_null.so.6.0 00:04:00.783 SYMLINK libspdk_bdev_passthru.so 00:04:00.783 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:00.783 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:00.783 SYMLINK libspdk_bdev_null.so 00:04:00.783 CC module/bdev/raid/raid0.o 00:04:00.783 CC module/bdev/raid/raid1.o 00:04:00.783 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:00.783 CC module/bdev/split/vbdev_split_rpc.o 00:04:00.783 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:01.049 CC module/bdev/raid/concat.o 00:04:01.049 CC module/bdev/xnvme/bdev_xnvme.o 00:04:01.049 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:01.049 LIB libspdk_bdev_split.a 00:04:01.049 SO libspdk_bdev_split.so.6.0 00:04:01.049 CC module/bdev/aio/bdev_aio.o 00:04:01.049 SYMLINK libspdk_bdev_split.so 00:04:01.049 CC module/bdev/aio/bdev_aio_rpc.o 00:04:01.049 LIB libspdk_bdev_zone_block.a 00:04:01.049 CC module/bdev/ftl/bdev_ftl.o 00:04:01.049 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:01.049 SO libspdk_bdev_zone_block.so.6.0 00:04:01.337 SYMLINK libspdk_bdev_zone_block.so 00:04:01.337 LIB libspdk_bdev_xnvme.a 00:04:01.337 CC module/bdev/iscsi/bdev_iscsi.o 00:04:01.337 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:01.337 SO libspdk_bdev_xnvme.so.3.0 00:04:01.337 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:01.337 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:01.337 SYMLINK libspdk_bdev_xnvme.so 00:04:01.337 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:01.337 LIB libspdk_bdev_aio.a 00:04:01.337 SO libspdk_bdev_aio.so.6.0 00:04:01.337 LIB libspdk_bdev_ftl.a 00:04:01.337 LIB libspdk_bdev_raid.a 00:04:01.337 SYMLINK libspdk_bdev_aio.so 00:04:01.337 SO libspdk_bdev_ftl.so.6.0 00:04:01.337 SO libspdk_bdev_raid.so.6.0 00:04:01.337 SYMLINK libspdk_bdev_ftl.so 00:04:01.594 SYMLINK libspdk_bdev_raid.so 00:04:01.595 LIB libspdk_bdev_iscsi.a 00:04:01.595 SO libspdk_bdev_iscsi.so.6.0 00:04:01.595 SYMLINK libspdk_bdev_iscsi.so 00:04:01.595 LIB libspdk_bdev_virtio.a 00:04:01.595 SO libspdk_bdev_virtio.so.6.0 00:04:01.853 SYMLINK libspdk_bdev_virtio.so 00:04:02.419 LIB libspdk_bdev_nvme.a 00:04:02.419 SO libspdk_bdev_nvme.so.7.0 00:04:02.677 SYMLINK libspdk_bdev_nvme.so 00:04:02.935 CC module/event/subsystems/iobuf/iobuf.o 00:04:02.935 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:02.935 CC module/event/subsystems/vmd/vmd.o 00:04:02.935 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:02.935 CC module/event/subsystems/keyring/keyring.o 00:04:02.935 CC module/event/subsystems/fsdev/fsdev.o 00:04:02.935 CC module/event/subsystems/sock/sock.o 00:04:02.935 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:02.935 CC module/event/subsystems/scheduler/scheduler.o 00:04:02.935 LIB libspdk_event_vhost_blk.a 
00:04:02.935 LIB libspdk_event_fsdev.a 00:04:02.935 LIB libspdk_event_sock.a 00:04:02.935 LIB libspdk_event_vmd.a 00:04:02.935 LIB libspdk_event_keyring.a 00:04:02.935 SO libspdk_event_vhost_blk.so.3.0 00:04:02.935 LIB libspdk_event_scheduler.a 00:04:02.935 SO libspdk_event_fsdev.so.1.0 00:04:02.935 SO libspdk_event_sock.so.5.0 00:04:02.935 SO libspdk_event_vmd.so.6.0 00:04:02.935 LIB libspdk_event_iobuf.a 00:04:02.935 SO libspdk_event_scheduler.so.4.0 00:04:02.935 SO libspdk_event_keyring.so.1.0 00:04:02.935 SYMLINK libspdk_event_vhost_blk.so 00:04:02.935 SO libspdk_event_iobuf.so.3.0 00:04:02.935 SYMLINK libspdk_event_fsdev.so 00:04:03.193 SYMLINK libspdk_event_sock.so 00:04:03.193 SYMLINK libspdk_event_scheduler.so 00:04:03.193 SYMLINK libspdk_event_keyring.so 00:04:03.193 SYMLINK libspdk_event_vmd.so 00:04:03.193 SYMLINK libspdk_event_iobuf.so 00:04:03.451 CC module/event/subsystems/accel/accel.o 00:04:03.451 LIB libspdk_event_accel.a 00:04:03.451 SO libspdk_event_accel.so.6.0 00:04:03.451 SYMLINK libspdk_event_accel.so 00:04:03.710 CC module/event/subsystems/bdev/bdev.o 00:04:03.968 LIB libspdk_event_bdev.a 00:04:03.968 SO libspdk_event_bdev.so.6.0 00:04:03.968 SYMLINK libspdk_event_bdev.so 00:04:03.968 CC module/event/subsystems/ublk/ublk.o 00:04:03.968 CC module/event/subsystems/scsi/scsi.o 00:04:04.226 CC module/event/subsystems/nbd/nbd.o 00:04:04.226 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:04.226 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:04.226 LIB libspdk_event_scsi.a 00:04:04.226 LIB libspdk_event_nbd.a 00:04:04.226 LIB libspdk_event_ublk.a 00:04:04.226 SO libspdk_event_scsi.so.6.0 00:04:04.226 SO libspdk_event_nbd.so.6.0 00:04:04.226 SO libspdk_event_ublk.so.3.0 00:04:04.226 SYMLINK libspdk_event_nbd.so 00:04:04.226 SYMLINK libspdk_event_scsi.so 00:04:04.226 LIB libspdk_event_nvmf.a 00:04:04.226 SYMLINK libspdk_event_ublk.so 00:04:04.226 SO libspdk_event_nvmf.so.6.0 00:04:04.226 SYMLINK libspdk_event_nvmf.so 00:04:04.483 CC module/event/subsystems/iscsi/iscsi.o 00:04:04.483 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:04.483 LIB libspdk_event_vhost_scsi.a 00:04:04.483 LIB libspdk_event_iscsi.a 00:04:04.483 SO libspdk_event_vhost_scsi.so.3.0 00:04:04.483 SO libspdk_event_iscsi.so.6.0 00:04:04.741 SYMLINK libspdk_event_vhost_scsi.so 00:04:04.741 SYMLINK libspdk_event_iscsi.so 00:04:04.741 SO libspdk.so.6.0 00:04:04.741 SYMLINK libspdk.so 00:04:04.999 TEST_HEADER include/spdk/accel.h 00:04:04.999 TEST_HEADER include/spdk/accel_module.h 00:04:04.999 TEST_HEADER include/spdk/assert.h 00:04:04.999 CXX app/trace/trace.o 00:04:04.999 TEST_HEADER include/spdk/barrier.h 00:04:04.999 TEST_HEADER include/spdk/base64.h 00:04:04.999 TEST_HEADER include/spdk/bdev.h 00:04:04.999 CC test/rpc_client/rpc_client_test.o 00:04:04.999 TEST_HEADER include/spdk/bdev_module.h 00:04:04.999 TEST_HEADER include/spdk/bdev_zone.h 00:04:04.999 TEST_HEADER include/spdk/bit_array.h 00:04:04.999 TEST_HEADER include/spdk/bit_pool.h 00:04:04.999 TEST_HEADER include/spdk/blob_bdev.h 00:04:04.999 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:04.999 TEST_HEADER include/spdk/blobfs.h 00:04:04.999 TEST_HEADER include/spdk/blob.h 00:04:04.999 TEST_HEADER include/spdk/conf.h 00:04:04.999 TEST_HEADER include/spdk/config.h 00:04:04.999 TEST_HEADER include/spdk/cpuset.h 00:04:04.999 TEST_HEADER include/spdk/crc16.h 00:04:04.999 TEST_HEADER include/spdk/crc32.h 00:04:04.999 TEST_HEADER include/spdk/crc64.h 00:04:04.999 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:04.999 TEST_HEADER 
include/spdk/dif.h 00:04:04.999 TEST_HEADER include/spdk/dma.h 00:04:04.999 TEST_HEADER include/spdk/endian.h 00:04:04.999 TEST_HEADER include/spdk/env_dpdk.h 00:04:04.999 TEST_HEADER include/spdk/env.h 00:04:04.999 TEST_HEADER include/spdk/event.h 00:04:04.999 TEST_HEADER include/spdk/fd_group.h 00:04:04.999 TEST_HEADER include/spdk/fd.h 00:04:04.999 TEST_HEADER include/spdk/file.h 00:04:04.999 TEST_HEADER include/spdk/fsdev.h 00:04:04.999 TEST_HEADER include/spdk/fsdev_module.h 00:04:04.999 TEST_HEADER include/spdk/ftl.h 00:04:04.999 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:04.999 TEST_HEADER include/spdk/gpt_spec.h 00:04:04.999 TEST_HEADER include/spdk/hexlify.h 00:04:04.999 TEST_HEADER include/spdk/histogram_data.h 00:04:04.999 TEST_HEADER include/spdk/idxd.h 00:04:04.999 TEST_HEADER include/spdk/idxd_spec.h 00:04:04.999 TEST_HEADER include/spdk/init.h 00:04:04.999 CC test/thread/poller_perf/poller_perf.o 00:04:04.999 TEST_HEADER include/spdk/ioat.h 00:04:04.999 TEST_HEADER include/spdk/ioat_spec.h 00:04:04.999 CC examples/ioat/perf/perf.o 00:04:04.999 TEST_HEADER include/spdk/iscsi_spec.h 00:04:04.999 TEST_HEADER include/spdk/json.h 00:04:04.999 TEST_HEADER include/spdk/jsonrpc.h 00:04:04.999 TEST_HEADER include/spdk/keyring.h 00:04:04.999 TEST_HEADER include/spdk/keyring_module.h 00:04:04.999 CC examples/util/zipf/zipf.o 00:04:04.999 TEST_HEADER include/spdk/likely.h 00:04:04.999 TEST_HEADER include/spdk/log.h 00:04:04.999 TEST_HEADER include/spdk/lvol.h 00:04:04.999 TEST_HEADER include/spdk/md5.h 00:04:04.999 TEST_HEADER include/spdk/memory.h 00:04:04.999 TEST_HEADER include/spdk/mmio.h 00:04:04.999 TEST_HEADER include/spdk/nbd.h 00:04:04.999 TEST_HEADER include/spdk/net.h 00:04:04.999 TEST_HEADER include/spdk/notify.h 00:04:04.999 TEST_HEADER include/spdk/nvme.h 00:04:04.999 TEST_HEADER include/spdk/nvme_intel.h 00:04:04.999 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:04.999 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:04.999 CC test/dma/test_dma/test_dma.o 00:04:04.999 TEST_HEADER include/spdk/nvme_spec.h 00:04:04.999 CC test/app/bdev_svc/bdev_svc.o 00:04:04.999 TEST_HEADER include/spdk/nvme_zns.h 00:04:04.999 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:04.999 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:04.999 TEST_HEADER include/spdk/nvmf.h 00:04:04.999 TEST_HEADER include/spdk/nvmf_spec.h 00:04:04.999 CC test/env/mem_callbacks/mem_callbacks.o 00:04:04.999 TEST_HEADER include/spdk/nvmf_transport.h 00:04:05.000 TEST_HEADER include/spdk/opal.h 00:04:05.000 TEST_HEADER include/spdk/opal_spec.h 00:04:05.000 TEST_HEADER include/spdk/pci_ids.h 00:04:05.000 TEST_HEADER include/spdk/pipe.h 00:04:05.000 TEST_HEADER include/spdk/queue.h 00:04:05.000 TEST_HEADER include/spdk/reduce.h 00:04:05.000 TEST_HEADER include/spdk/rpc.h 00:04:05.000 TEST_HEADER include/spdk/scheduler.h 00:04:05.000 TEST_HEADER include/spdk/scsi.h 00:04:05.000 TEST_HEADER include/spdk/scsi_spec.h 00:04:05.000 TEST_HEADER include/spdk/sock.h 00:04:05.000 TEST_HEADER include/spdk/stdinc.h 00:04:05.000 TEST_HEADER include/spdk/string.h 00:04:05.000 TEST_HEADER include/spdk/thread.h 00:04:05.000 TEST_HEADER include/spdk/trace.h 00:04:05.000 TEST_HEADER include/spdk/trace_parser.h 00:04:05.000 TEST_HEADER include/spdk/tree.h 00:04:05.000 TEST_HEADER include/spdk/ublk.h 00:04:05.000 TEST_HEADER include/spdk/util.h 00:04:05.000 TEST_HEADER include/spdk/uuid.h 00:04:05.000 TEST_HEADER include/spdk/version.h 00:04:05.000 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:05.000 TEST_HEADER include/spdk/vfio_user_spec.h 
00:04:05.000 TEST_HEADER include/spdk/vhost.h 00:04:05.000 TEST_HEADER include/spdk/vmd.h 00:04:05.000 TEST_HEADER include/spdk/xor.h 00:04:05.000 TEST_HEADER include/spdk/zipf.h 00:04:05.000 CXX test/cpp_headers/accel.o 00:04:05.257 LINK zipf 00:04:05.257 LINK interrupt_tgt 00:04:05.257 LINK rpc_client_test 00:04:05.257 LINK poller_perf 00:04:05.257 LINK ioat_perf 00:04:05.257 LINK bdev_svc 00:04:05.257 CXX test/cpp_headers/accel_module.o 00:04:05.257 LINK spdk_trace 00:04:05.257 CC examples/ioat/verify/verify.o 00:04:05.257 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:05.257 CXX test/cpp_headers/assert.o 00:04:05.257 CC test/env/vtophys/vtophys.o 00:04:05.257 CC app/trace_record/trace_record.o 00:04:05.515 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:05.515 CXX test/cpp_headers/barrier.o 00:04:05.515 CC examples/thread/thread/thread_ex.o 00:04:05.515 LINK vtophys 00:04:05.515 LINK env_dpdk_post_init 00:04:05.515 CC test/app/histogram_perf/histogram_perf.o 00:04:05.515 LINK test_dma 00:04:05.515 LINK verify 00:04:05.515 LINK spdk_trace_record 00:04:05.515 LINK mem_callbacks 00:04:05.515 CXX test/cpp_headers/base64.o 00:04:05.515 CC test/app/jsoncat/jsoncat.o 00:04:05.774 LINK histogram_perf 00:04:05.774 CC test/app/stub/stub.o 00:04:05.774 CXX test/cpp_headers/bdev.o 00:04:05.774 LINK thread 00:04:05.774 CC test/env/memory/memory_ut.o 00:04:05.774 LINK jsoncat 00:04:05.774 CC app/nvmf_tgt/nvmf_main.o 00:04:05.774 CC examples/sock/hello_world/hello_sock.o 00:04:05.774 LINK nvme_fuzz 00:04:05.774 CXX test/cpp_headers/bdev_module.o 00:04:05.774 LINK stub 00:04:06.032 LINK nvmf_tgt 00:04:06.032 CC test/env/pci/pci_ut.o 00:04:06.032 CC test/nvme/aer/aer.o 00:04:06.032 CC test/event/event_perf/event_perf.o 00:04:06.032 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:06.032 CXX test/cpp_headers/bdev_zone.o 00:04:06.032 CC examples/vmd/lsvmd/lsvmd.o 00:04:06.032 LINK hello_sock 00:04:06.032 LINK event_perf 00:04:06.032 CC examples/vmd/led/led.o 00:04:06.032 LINK lsvmd 00:04:06.032 CXX test/cpp_headers/bit_array.o 00:04:06.291 LINK aer 00:04:06.291 LINK led 00:04:06.291 CC app/iscsi_tgt/iscsi_tgt.o 00:04:06.291 CC test/event/reactor/reactor.o 00:04:06.291 CC examples/idxd/perf/perf.o 00:04:06.291 CXX test/cpp_headers/bit_pool.o 00:04:06.291 LINK reactor 00:04:06.291 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:06.291 LINK pci_ut 00:04:06.291 CC test/event/reactor_perf/reactor_perf.o 00:04:06.291 LINK iscsi_tgt 00:04:06.291 CC test/nvme/reset/reset.o 00:04:06.291 CXX test/cpp_headers/blob_bdev.o 00:04:06.549 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:06.549 LINK reactor_perf 00:04:06.549 CC test/event/app_repeat/app_repeat.o 00:04:06.549 LINK idxd_perf 00:04:06.549 CXX test/cpp_headers/blobfs_bdev.o 00:04:06.549 LINK reset 00:04:06.549 CC test/event/scheduler/scheduler.o 00:04:06.549 CC app/spdk_tgt/spdk_tgt.o 00:04:06.549 LINK app_repeat 00:04:06.807 CXX test/cpp_headers/blobfs.o 00:04:06.807 CC app/spdk_lspci/spdk_lspci.o 00:04:06.807 LINK vhost_fuzz 00:04:06.807 CC test/nvme/sgl/sgl.o 00:04:06.807 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:06.807 LINK memory_ut 00:04:06.807 LINK scheduler 00:04:06.807 LINK spdk_tgt 00:04:06.807 LINK spdk_lspci 00:04:06.807 CXX test/cpp_headers/blob.o 00:04:06.807 CC test/nvme/e2edp/nvme_dp.o 00:04:06.807 CC test/nvme/overhead/overhead.o 00:04:07.066 LINK sgl 00:04:07.066 CXX test/cpp_headers/conf.o 00:04:07.066 LINK hello_fsdev 00:04:07.066 CC app/spdk_nvme_perf/perf.o 00:04:07.066 CC test/nvme/err_injection/err_injection.o 
00:04:07.066 CC test/accel/dif/dif.o 00:04:07.066 LINK nvme_dp 00:04:07.066 CC examples/accel/perf/accel_perf.o 00:04:07.066 CXX test/cpp_headers/config.o 00:04:07.066 CC test/nvme/startup/startup.o 00:04:07.066 CXX test/cpp_headers/cpuset.o 00:04:07.066 LINK overhead 00:04:07.066 CXX test/cpp_headers/crc16.o 00:04:07.324 LINK err_injection 00:04:07.324 CC test/nvme/reserve/reserve.o 00:04:07.324 CXX test/cpp_headers/crc32.o 00:04:07.324 LINK startup 00:04:07.324 CC test/nvme/simple_copy/simple_copy.o 00:04:07.324 CC test/nvme/connect_stress/connect_stress.o 00:04:07.324 CC test/nvme/boot_partition/boot_partition.o 00:04:07.324 LINK reserve 00:04:07.324 CXX test/cpp_headers/crc64.o 00:04:07.582 LINK boot_partition 00:04:07.582 CXX test/cpp_headers/dif.o 00:04:07.582 LINK connect_stress 00:04:07.582 LINK simple_copy 00:04:07.582 CC test/blobfs/mkfs/mkfs.o 00:04:07.582 LINK accel_perf 00:04:07.582 CC test/nvme/compliance/nvme_compliance.o 00:04:07.582 CXX test/cpp_headers/dma.o 00:04:07.582 LINK dif 00:04:07.582 LINK iscsi_fuzz 00:04:07.840 CC test/nvme/fused_ordering/fused_ordering.o 00:04:07.840 CC examples/blob/hello_world/hello_blob.o 00:04:07.840 CXX test/cpp_headers/endian.o 00:04:07.840 CC examples/blob/cli/blobcli.o 00:04:07.840 LINK mkfs 00:04:07.840 CXX test/cpp_headers/env_dpdk.o 00:04:07.840 LINK spdk_nvme_perf 00:04:07.840 CC test/lvol/esnap/esnap.o 00:04:07.840 CXX test/cpp_headers/env.o 00:04:07.840 LINK fused_ordering 00:04:07.840 LINK nvme_compliance 00:04:07.840 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:07.840 LINK hello_blob 00:04:08.097 CC app/spdk_nvme_identify/identify.o 00:04:08.097 CC app/spdk_nvme_discover/discovery_aer.o 00:04:08.097 CXX test/cpp_headers/event.o 00:04:08.097 LINK doorbell_aers 00:04:08.097 CXX test/cpp_headers/fd_group.o 00:04:08.097 CC test/nvme/cuse/cuse.o 00:04:08.097 CC test/nvme/fdp/fdp.o 00:04:08.097 CC test/bdev/bdevio/bdevio.o 00:04:08.097 LINK spdk_nvme_discover 00:04:08.097 LINK blobcli 00:04:08.097 CXX test/cpp_headers/fd.o 00:04:08.355 CXX test/cpp_headers/file.o 00:04:08.355 CXX test/cpp_headers/fsdev.o 00:04:08.355 CXX test/cpp_headers/fsdev_module.o 00:04:08.355 CC app/spdk_top/spdk_top.o 00:04:08.355 LINK fdp 00:04:08.355 CXX test/cpp_headers/ftl.o 00:04:08.355 CC examples/nvme/hello_world/hello_world.o 00:04:08.648 CC app/vhost/vhost.o 00:04:08.648 CC examples/bdev/hello_world/hello_bdev.o 00:04:08.648 LINK bdevio 00:04:08.648 LINK spdk_nvme_identify 00:04:08.648 CXX test/cpp_headers/fuse_dispatcher.o 00:04:08.648 CC examples/nvme/reconnect/reconnect.o 00:04:08.648 LINK hello_world 00:04:08.648 LINK vhost 00:04:08.648 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:08.648 LINK hello_bdev 00:04:08.906 CXX test/cpp_headers/gpt_spec.o 00:04:08.906 CXX test/cpp_headers/hexlify.o 00:04:08.906 CC app/spdk_dd/spdk_dd.o 00:04:08.906 CXX test/cpp_headers/histogram_data.o 00:04:08.906 LINK reconnect 00:04:08.906 CC examples/nvme/arbitration/arbitration.o 00:04:08.906 CC examples/bdev/bdevperf/bdevperf.o 00:04:08.906 CXX test/cpp_headers/idxd.o 00:04:08.906 CC examples/nvme/hotplug/hotplug.o 00:04:09.165 LINK spdk_top 00:04:09.165 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:09.165 CXX test/cpp_headers/idxd_spec.o 00:04:09.165 CXX test/cpp_headers/init.o 00:04:09.165 LINK spdk_dd 00:04:09.165 LINK hotplug 00:04:09.165 LINK cmb_copy 00:04:09.165 CXX test/cpp_headers/ioat.o 00:04:09.165 LINK nvme_manage 00:04:09.165 LINK arbitration 00:04:09.423 CC examples/nvme/abort/abort.o 00:04:09.423 LINK cuse 00:04:09.423 CXX 
test/cpp_headers/ioat_spec.o 00:04:09.423 CXX test/cpp_headers/iscsi_spec.o 00:04:09.423 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:09.423 CXX test/cpp_headers/json.o 00:04:09.423 CC app/fio/nvme/fio_plugin.o 00:04:09.423 CXX test/cpp_headers/jsonrpc.o 00:04:09.423 CXX test/cpp_headers/keyring.o 00:04:09.423 CXX test/cpp_headers/keyring_module.o 00:04:09.423 LINK pmr_persistence 00:04:09.423 CC app/fio/bdev/fio_plugin.o 00:04:09.423 CXX test/cpp_headers/likely.o 00:04:09.423 CXX test/cpp_headers/log.o 00:04:09.681 CXX test/cpp_headers/lvol.o 00:04:09.681 CXX test/cpp_headers/md5.o 00:04:09.681 CXX test/cpp_headers/memory.o 00:04:09.681 CXX test/cpp_headers/mmio.o 00:04:09.681 CXX test/cpp_headers/nbd.o 00:04:09.681 CXX test/cpp_headers/net.o 00:04:09.681 LINK abort 00:04:09.681 CXX test/cpp_headers/notify.o 00:04:09.681 CXX test/cpp_headers/nvme.o 00:04:09.681 CXX test/cpp_headers/nvme_intel.o 00:04:09.681 LINK bdevperf 00:04:09.681 CXX test/cpp_headers/nvme_ocssd.o 00:04:09.681 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:09.681 CXX test/cpp_headers/nvme_spec.o 00:04:09.937 CXX test/cpp_headers/nvme_zns.o 00:04:09.937 LINK spdk_nvme 00:04:09.937 CXX test/cpp_headers/nvmf_cmd.o 00:04:09.937 LINK spdk_bdev 00:04:09.937 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:09.937 CXX test/cpp_headers/nvmf.o 00:04:09.937 CXX test/cpp_headers/nvmf_spec.o 00:04:09.937 CXX test/cpp_headers/nvmf_transport.o 00:04:09.937 CXX test/cpp_headers/opal.o 00:04:09.937 CXX test/cpp_headers/opal_spec.o 00:04:09.937 CXX test/cpp_headers/pci_ids.o 00:04:09.937 CXX test/cpp_headers/pipe.o 00:04:09.937 CXX test/cpp_headers/queue.o 00:04:09.937 CXX test/cpp_headers/reduce.o 00:04:09.937 CC examples/nvmf/nvmf/nvmf.o 00:04:09.937 CXX test/cpp_headers/rpc.o 00:04:09.937 CXX test/cpp_headers/scheduler.o 00:04:09.937 CXX test/cpp_headers/scsi.o 00:04:10.195 CXX test/cpp_headers/scsi_spec.o 00:04:10.195 CXX test/cpp_headers/sock.o 00:04:10.195 CXX test/cpp_headers/stdinc.o 00:04:10.195 CXX test/cpp_headers/string.o 00:04:10.195 CXX test/cpp_headers/thread.o 00:04:10.195 CXX test/cpp_headers/trace.o 00:04:10.195 CXX test/cpp_headers/trace_parser.o 00:04:10.195 CXX test/cpp_headers/tree.o 00:04:10.195 CXX test/cpp_headers/ublk.o 00:04:10.195 CXX test/cpp_headers/util.o 00:04:10.195 CXX test/cpp_headers/uuid.o 00:04:10.195 CXX test/cpp_headers/version.o 00:04:10.195 CXX test/cpp_headers/vfio_user_pci.o 00:04:10.195 CXX test/cpp_headers/vfio_user_spec.o 00:04:10.195 CXX test/cpp_headers/vhost.o 00:04:10.195 CXX test/cpp_headers/vmd.o 00:04:10.195 LINK nvmf 00:04:10.195 CXX test/cpp_headers/xor.o 00:04:10.195 CXX test/cpp_headers/zipf.o 00:04:12.093 LINK esnap 00:04:12.659 00:04:12.659 real 1m1.855s 00:04:12.659 user 5m6.855s 00:04:12.659 sys 0m51.454s 00:04:12.659 10:24:47 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:12.659 10:24:47 make -- common/autotest_common.sh@10 -- $ set +x 00:04:12.659 ************************************ 00:04:12.659 END TEST make 00:04:12.659 ************************************ 00:04:12.659 10:24:47 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:12.659 10:24:47 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:12.659 10:24:47 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:12.659 10:24:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:12.659 10:24:47 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:12.659 10:24:47 -- pm/common@44 -- $ pid=5797 00:04:12.659 10:24:47 -- 
pm/common@50 -- $ kill -TERM 5797 00:04:12.659 10:24:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:12.659 10:24:47 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:12.659 10:24:47 -- pm/common@44 -- $ pid=5799 00:04:12.659 10:24:47 -- pm/common@50 -- $ kill -TERM 5799 00:04:12.659 10:24:47 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:12.659 10:24:47 -- common/autotest_common.sh@1681 -- # lcov --version 00:04:12.659 10:24:47 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:12.659 10:24:47 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:12.659 10:24:47 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:12.659 10:24:47 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:12.659 10:24:47 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:12.659 10:24:47 -- scripts/common.sh@336 -- # IFS=.-: 00:04:12.659 10:24:47 -- scripts/common.sh@336 -- # read -ra ver1 00:04:12.659 10:24:47 -- scripts/common.sh@337 -- # IFS=.-: 00:04:12.659 10:24:47 -- scripts/common.sh@337 -- # read -ra ver2 00:04:12.659 10:24:47 -- scripts/common.sh@338 -- # local 'op=<' 00:04:12.659 10:24:47 -- scripts/common.sh@340 -- # ver1_l=2 00:04:12.659 10:24:47 -- scripts/common.sh@341 -- # ver2_l=1 00:04:12.659 10:24:47 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:12.659 10:24:47 -- scripts/common.sh@344 -- # case "$op" in 00:04:12.659 10:24:47 -- scripts/common.sh@345 -- # : 1 00:04:12.659 10:24:47 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:12.659 10:24:47 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:12.659 10:24:47 -- scripts/common.sh@365 -- # decimal 1 00:04:12.659 10:24:47 -- scripts/common.sh@353 -- # local d=1 00:04:12.659 10:24:47 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:12.659 10:24:47 -- scripts/common.sh@355 -- # echo 1 00:04:12.659 10:24:47 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:12.659 10:24:47 -- scripts/common.sh@366 -- # decimal 2 00:04:12.659 10:24:47 -- scripts/common.sh@353 -- # local d=2 00:04:12.659 10:24:47 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:12.659 10:24:47 -- scripts/common.sh@355 -- # echo 2 00:04:12.659 10:24:47 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:12.659 10:24:47 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:12.659 10:24:47 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:12.659 10:24:47 -- scripts/common.sh@368 -- # return 0 00:04:12.659 10:24:47 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:12.659 10:24:47 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:12.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.659 --rc genhtml_branch_coverage=1 00:04:12.659 --rc genhtml_function_coverage=1 00:04:12.659 --rc genhtml_legend=1 00:04:12.659 --rc geninfo_all_blocks=1 00:04:12.659 --rc geninfo_unexecuted_blocks=1 00:04:12.659 00:04:12.659 ' 00:04:12.659 10:24:47 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:12.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.659 --rc genhtml_branch_coverage=1 00:04:12.659 --rc genhtml_function_coverage=1 00:04:12.659 --rc genhtml_legend=1 00:04:12.659 --rc geninfo_all_blocks=1 00:04:12.659 --rc geninfo_unexecuted_blocks=1 00:04:12.659 00:04:12.659 ' 00:04:12.659 10:24:47 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:12.659 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:04:12.659 --rc genhtml_branch_coverage=1 00:04:12.659 --rc genhtml_function_coverage=1 00:04:12.659 --rc genhtml_legend=1 00:04:12.659 --rc geninfo_all_blocks=1 00:04:12.659 --rc geninfo_unexecuted_blocks=1 00:04:12.659 00:04:12.659 ' 00:04:12.659 10:24:47 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:12.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.659 --rc genhtml_branch_coverage=1 00:04:12.659 --rc genhtml_function_coverage=1 00:04:12.659 --rc genhtml_legend=1 00:04:12.659 --rc geninfo_all_blocks=1 00:04:12.659 --rc geninfo_unexecuted_blocks=1 00:04:12.659 00:04:12.659 ' 00:04:12.659 10:24:47 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:12.659 10:24:47 -- nvmf/common.sh@7 -- # uname -s 00:04:12.659 10:24:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:12.659 10:24:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:12.659 10:24:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:12.659 10:24:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:12.659 10:24:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:12.659 10:24:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:12.659 10:24:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:12.659 10:24:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:12.659 10:24:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:12.659 10:24:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:12.659 10:24:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d3b890a9-2d28-4a32-bd03-591ea31d75ee 00:04:12.659 10:24:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=d3b890a9-2d28-4a32-bd03-591ea31d75ee 00:04:12.659 10:24:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:12.659 10:24:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:12.659 10:24:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:12.659 10:24:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:12.659 10:24:47 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:12.659 10:24:47 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:12.659 10:24:47 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:12.659 10:24:47 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:12.659 10:24:47 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:12.659 10:24:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:12.659 10:24:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:12.659 10:24:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:12.659 10:24:47 -- paths/export.sh@5 -- # export PATH 00:04:12.659 10:24:47 -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:12.659 10:24:47 -- nvmf/common.sh@51 -- # : 0 00:04:12.659 10:24:47 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:12.659 10:24:47 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:12.659 10:24:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:12.659 10:24:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:12.659 10:24:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:12.659 10:24:47 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:12.659 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:12.659 10:24:47 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:12.659 10:24:47 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:12.659 10:24:47 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:12.659 10:24:47 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:12.659 10:24:47 -- spdk/autotest.sh@32 -- # uname -s 00:04:12.659 10:24:47 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:12.659 10:24:47 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:12.659 10:24:47 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:12.659 10:24:47 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:12.659 10:24:47 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:12.659 10:24:47 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:12.659 10:24:47 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:12.659 10:24:47 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:12.659 10:24:47 -- spdk/autotest.sh@48 -- # udevadm_pid=67870 00:04:12.659 10:24:47 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:12.659 10:24:47 -- pm/common@17 -- # local monitor 00:04:12.659 10:24:47 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:12.659 10:24:47 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:12.659 10:24:47 -- pm/common@25 -- # sleep 1 00:04:12.659 10:24:47 -- pm/common@21 -- # date +%s 00:04:12.659 10:24:47 -- pm/common@21 -- # date +%s 00:04:12.659 10:24:47 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:12.659 10:24:47 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1727519087 00:04:12.659 10:24:47 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1727519087 00:04:12.659 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1727519087_collect-vmstat.pm.log 00:04:12.659 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1727519087_collect-cpu-load.pm.log 00:04:14.032 10:24:48 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:14.032 10:24:48 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:14.032 10:24:48 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:14.032 10:24:48 -- common/autotest_common.sh@10 -- # set +x 00:04:14.032 10:24:48 -- spdk/autotest.sh@59 
-- # create_test_list 00:04:14.032 10:24:48 -- common/autotest_common.sh@748 -- # xtrace_disable 00:04:14.032 10:24:48 -- common/autotest_common.sh@10 -- # set +x 00:04:14.032 10:24:48 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:14.032 10:24:48 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:14.032 10:24:48 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:14.032 10:24:48 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:14.032 10:24:48 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:14.032 10:24:48 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:14.032 10:24:48 -- common/autotest_common.sh@1455 -- # uname 00:04:14.032 10:24:48 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:14.032 10:24:48 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:14.032 10:24:48 -- common/autotest_common.sh@1475 -- # uname 00:04:14.032 10:24:48 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:14.032 10:24:48 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:14.032 10:24:48 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:14.032 lcov: LCOV version 1.15 00:04:14.032 10:24:48 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:28.943 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:28.943 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:43.854 10:25:17 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:43.854 10:25:17 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:43.854 10:25:17 -- common/autotest_common.sh@10 -- # set +x 00:04:43.854 10:25:17 -- spdk/autotest.sh@78 -- # rm -f 00:04:43.854 10:25:17 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:43.855 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:43.855 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:43.855 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:43.855 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:43.855 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:44.114 10:25:18 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:44.114 10:25:18 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:44.114 10:25:18 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:44.114 10:25:18 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:44.114 10:25:18 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:44.114 10:25:18 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:44.114 10:25:18 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:44.114 10:25:18 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:44.114 10:25:18 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:44.114 10:25:18 -- 
common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:44.114 10:25:18 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:04:44.114 10:25:18 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:04:44.114 10:25:18 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:44.114 10:25:18 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:44.114 10:25:18 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:44.114 10:25:18 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:04:44.114 10:25:18 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:04:44.114 10:25:18 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:44.114 10:25:18 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:44.114 10:25:18 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:44.114 10:25:18 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:04:44.114 10:25:18 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:04:44.114 10:25:18 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:44.114 10:25:18 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:44.114 10:25:18 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:44.114 10:25:18 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2c2n1 00:04:44.114 10:25:18 -- common/autotest_common.sh@1648 -- # local device=nvme2c2n1 00:04:44.114 10:25:18 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2c2n1/queue/zoned ]] 00:04:44.114 10:25:18 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:44.114 10:25:18 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:44.114 10:25:18 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:04:44.114 10:25:18 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:04:44.114 10:25:18 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:44.114 10:25:18 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:44.114 10:25:18 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:44.114 10:25:18 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:04:44.114 10:25:18 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:04:44.114 10:25:18 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:44.114 10:25:18 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:44.114 10:25:18 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:44.114 10:25:18 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:44.114 10:25:18 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:44.114 10:25:18 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:44.114 10:25:18 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:44.114 10:25:18 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:44.114 No valid GPT data, bailing 00:04:44.114 10:25:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:44.114 10:25:18 -- scripts/common.sh@394 -- # pt= 00:04:44.114 10:25:18 -- scripts/common.sh@395 -- # return 1 00:04:44.114 10:25:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:44.114 1+0 records in 00:04:44.114 1+0 records out 00:04:44.114 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0234355 s, 44.7 MB/s 
00:04:44.114 10:25:18 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:44.114 10:25:18 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:44.114 10:25:18 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:44.114 10:25:18 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:44.114 10:25:18 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:44.114 No valid GPT data, bailing 00:04:44.114 10:25:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:44.114 10:25:18 -- scripts/common.sh@394 -- # pt= 00:04:44.114 10:25:18 -- scripts/common.sh@395 -- # return 1 00:04:44.114 10:25:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:44.114 1+0 records in 00:04:44.114 1+0 records out 00:04:44.114 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00457944 s, 229 MB/s 00:04:44.114 10:25:18 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:44.114 10:25:18 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:44.114 10:25:18 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:44.114 10:25:18 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:44.114 10:25:18 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:44.114 No valid GPT data, bailing 00:04:44.114 10:25:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:44.114 10:25:18 -- scripts/common.sh@394 -- # pt= 00:04:44.114 10:25:18 -- scripts/common.sh@395 -- # return 1 00:04:44.114 10:25:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:44.374 1+0 records in 00:04:44.374 1+0 records out 00:04:44.374 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00535904 s, 196 MB/s 00:04:44.374 10:25:18 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:44.375 10:25:18 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:44.375 10:25:18 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:44.375 10:25:18 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:44.375 10:25:18 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:44.375 No valid GPT data, bailing 00:04:44.375 10:25:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:44.375 10:25:18 -- scripts/common.sh@394 -- # pt= 00:04:44.375 10:25:18 -- scripts/common.sh@395 -- # return 1 00:04:44.375 10:25:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:44.375 1+0 records in 00:04:44.375 1+0 records out 00:04:44.375 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0040075 s, 262 MB/s 00:04:44.375 10:25:18 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:44.375 10:25:18 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:44.375 10:25:18 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:04:44.375 10:25:18 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:04:44.375 10:25:18 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:44.375 No valid GPT data, bailing 00:04:44.375 10:25:19 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:44.375 10:25:19 -- scripts/common.sh@394 -- # pt= 00:04:44.375 10:25:19 -- scripts/common.sh@395 -- # return 1 00:04:44.375 10:25:19 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:44.375 1+0 records in 00:04:44.375 1+0 records out 00:04:44.375 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00364513 s, 288 MB/s 
00:04:44.375 10:25:19 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:44.375 10:25:19 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:44.375 10:25:19 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:04:44.375 10:25:19 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:04:44.375 10:25:19 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:44.375 No valid GPT data, bailing 00:04:44.375 10:25:19 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:44.375 10:25:19 -- scripts/common.sh@394 -- # pt= 00:04:44.375 10:25:19 -- scripts/common.sh@395 -- # return 1 00:04:44.375 10:25:19 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:44.375 1+0 records in 00:04:44.375 1+0 records out 00:04:44.375 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00536792 s, 195 MB/s 00:04:44.375 10:25:19 -- spdk/autotest.sh@105 -- # sync 00:04:44.634 10:25:19 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:44.634 10:25:19 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:44.635 10:25:19 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:46.542 10:25:20 -- spdk/autotest.sh@111 -- # uname -s 00:04:46.542 10:25:20 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:46.542 10:25:20 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:46.542 10:25:21 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:46.802 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:47.372 Hugepages 00:04:47.372 node hugesize free / total 00:04:47.372 node0 1048576kB 0 / 0 00:04:47.372 node0 2048kB 0 / 0 00:04:47.372 00:04:47.372 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:47.372 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:47.372 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:47.372 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:47.372 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:47.632 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:04:47.632 10:25:22 -- spdk/autotest.sh@117 -- # uname -s 00:04:47.632 10:25:22 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:47.632 10:25:22 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:47.632 10:25:22 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:48.202 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:48.772 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:48.772 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:48.772 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:48.772 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:48.772 10:25:23 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:49.712 10:25:24 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:49.712 10:25:24 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:49.712 10:25:24 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:49.712 10:25:24 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:49.712 10:25:24 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:49.712 10:25:24 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:49.712 10:25:24 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 
00:04:49.712 10:25:24 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:49.712 10:25:24 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:49.973 10:25:24 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:04:49.973 10:25:24 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:49.973 10:25:24 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:50.234 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:50.495 Waiting for block devices as requested 00:04:50.495 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:50.495 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:50.495 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:04:50.755 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:04:56.045 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:04:56.045 10:25:30 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:56.045 10:25:30 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:56.045 10:25:30 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:56.045 10:25:30 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:04:56.045 10:25:30 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:56.045 10:25:30 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:56.045 10:25:30 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:56.045 10:25:30 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:04:56.045 10:25:30 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:04:56.045 10:25:30 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:04:56.045 10:25:30 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:04:56.045 10:25:30 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:56.045 10:25:30 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:56.045 10:25:30 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:56.045 10:25:30 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:56.045 10:25:30 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:56.045 10:25:30 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:56.045 10:25:30 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:56.045 10:25:30 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:04:56.045 10:25:30 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:56.045 10:25:30 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:56.045 10:25:30 -- common/autotest_common.sh@1541 -- # continue 00:04:56.045 10:25:30 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:56.045 10:25:30 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:56.045 10:25:30 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:56.045 10:25:30 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:04:56.045 10:25:30 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 
00:04:56.045 10:25:30 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:56.046 10:25:30 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:56.046 10:25:30 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:56.046 10:25:30 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:56.046 10:25:30 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:56.046 10:25:30 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:56.046 10:25:30 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:56.046 10:25:30 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:56.046 10:25:30 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:56.046 10:25:30 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:56.046 10:25:30 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:56.046 10:25:30 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:56.046 10:25:30 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:56.046 10:25:30 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:56.046 10:25:30 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:56.046 10:25:30 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:56.046 10:25:30 -- common/autotest_common.sh@1541 -- # continue 00:04:56.046 10:25:30 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:56.046 10:25:30 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:56.046 10:25:30 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:56.046 10:25:30 -- common/autotest_common.sh@1485 -- # grep 0000:00:12.0/nvme/nvme 00:04:56.046 10:25:30 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:56.046 10:25:30 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:56.046 10:25:30 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:56.046 10:25:30 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme2 00:04:56.046 10:25:30 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme2 00:04:56.046 10:25:30 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme2 ]] 00:04:56.046 10:25:30 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme2 00:04:56.046 10:25:30 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:56.046 10:25:30 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:56.046 10:25:30 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:56.046 10:25:30 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:56.046 10:25:30 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:56.046 10:25:30 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme2 00:04:56.046 10:25:30 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:56.046 10:25:30 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:56.046 10:25:30 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:56.046 10:25:30 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:56.046 10:25:30 -- common/autotest_common.sh@1541 -- # continue 00:04:56.046 10:25:30 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:56.046 10:25:30 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:56.046 10:25:30 -- 
common/autotest_common.sh@1485 -- # grep 0000:00:13.0/nvme/nvme 00:04:56.046 10:25:30 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:56.046 10:25:30 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:56.046 10:25:30 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:56.046 10:25:30 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:56.046 10:25:30 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme3 00:04:56.046 10:25:30 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme3 00:04:56.046 10:25:30 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme3 ]] 00:04:56.046 10:25:30 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme3 00:04:56.046 10:25:30 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:56.046 10:25:30 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:56.046 10:25:30 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:56.046 10:25:30 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:56.046 10:25:30 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:56.046 10:25:30 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme3 00:04:56.046 10:25:30 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:56.046 10:25:30 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:56.046 10:25:30 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:56.046 10:25:30 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:56.046 10:25:30 -- common/autotest_common.sh@1541 -- # continue 00:04:56.046 10:25:30 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:56.046 10:25:30 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:56.046 10:25:30 -- common/autotest_common.sh@10 -- # set +x 00:04:56.046 10:25:30 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:56.046 10:25:30 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:56.046 10:25:30 -- common/autotest_common.sh@10 -- # set +x 00:04:56.046 10:25:30 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:56.307 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:56.878 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:56.878 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:57.139 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:57.139 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:57.139 10:25:31 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:57.139 10:25:31 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:57.139 10:25:31 -- common/autotest_common.sh@10 -- # set +x 00:04:57.139 10:25:31 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:57.139 10:25:31 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:57.139 10:25:31 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:57.139 10:25:31 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:57.139 10:25:31 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:57.139 10:25:31 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:57.139 10:25:31 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:57.139 10:25:31 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:57.139 10:25:31 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:57.139 
10:25:31 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:57.139 10:25:31 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:57.140 10:25:31 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:57.140 10:25:31 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:57.140 10:25:31 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:04:57.140 10:25:31 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:57.140 10:25:31 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:57.140 10:25:31 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:57.140 10:25:31 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:57.140 10:25:31 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:57.140 10:25:31 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:57.140 10:25:31 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:57.140 10:25:31 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:57.140 10:25:31 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:57.140 10:25:31 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:57.140 10:25:31 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:57.140 10:25:31 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:57.140 10:25:31 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:57.140 10:25:31 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:57.140 10:25:31 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:57.140 10:25:31 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:57.140 10:25:31 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:57.140 10:25:31 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:04:57.140 10:25:31 -- common/autotest_common.sh@1570 -- # return 0 00:04:57.140 10:25:31 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:57.140 10:25:31 -- common/autotest_common.sh@1578 -- # return 0 00:04:57.140 10:25:31 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:57.140 10:25:31 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:57.140 10:25:31 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:57.140 10:25:31 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:57.140 10:25:31 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:57.140 10:25:31 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:57.140 10:25:31 -- common/autotest_common.sh@10 -- # set +x 00:04:57.140 10:25:31 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:57.140 10:25:31 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:57.140 10:25:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:57.140 10:25:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:57.140 10:25:31 -- common/autotest_common.sh@10 -- # set +x 00:04:57.400 ************************************ 00:04:57.400 START TEST env 00:04:57.400 ************************************ 00:04:57.400 10:25:31 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:57.400 * Looking for test storage... 
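opal_revert_cleanup above only acts on controllers whose PCI device ID is 0x0a54; every emulated controller here reports 0x0010, so the filtered list is empty and the revert is skipped. A small sketch mirroring get_nvme_bdfs / get_nvme_bdfs_by_id as traced (gen_nvme.sh path and jq filter taken from the log):

    # Enumerate NVMe BDFs: gen_nvme.sh emits a JSON config, jq pulls out each traddr.
    mapfile -t all_bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')

    wanted=()
    for bdf in "${all_bdfs[@]}"; do
        # sysfs exposes the PCI device ID; keep only 0x0a54 parts.
        [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]] && wanted+=("$bdf")
    done
    echo "controllers needing opal revert: ${#wanted[@]}"    # 0 on this QEMU rig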
00:04:57.400 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:57.400 10:25:32 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:57.400 10:25:32 env -- common/autotest_common.sh@1681 -- # lcov --version 00:04:57.400 10:25:32 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:57.400 10:25:32 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:57.400 10:25:32 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:57.400 10:25:32 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:57.400 10:25:32 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:57.400 10:25:32 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:57.400 10:25:32 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:57.400 10:25:32 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:57.400 10:25:32 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:57.400 10:25:32 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:57.400 10:25:32 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:57.400 10:25:32 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:57.400 10:25:32 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:57.400 10:25:32 env -- scripts/common.sh@344 -- # case "$op" in 00:04:57.400 10:25:32 env -- scripts/common.sh@345 -- # : 1 00:04:57.400 10:25:32 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:57.400 10:25:32 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:57.400 10:25:32 env -- scripts/common.sh@365 -- # decimal 1 00:04:57.400 10:25:32 env -- scripts/common.sh@353 -- # local d=1 00:04:57.400 10:25:32 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:57.400 10:25:32 env -- scripts/common.sh@355 -- # echo 1 00:04:57.400 10:25:32 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:57.400 10:25:32 env -- scripts/common.sh@366 -- # decimal 2 00:04:57.400 10:25:32 env -- scripts/common.sh@353 -- # local d=2 00:04:57.400 10:25:32 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:57.400 10:25:32 env -- scripts/common.sh@355 -- # echo 2 00:04:57.400 10:25:32 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:57.400 10:25:32 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:57.400 10:25:32 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:57.400 10:25:32 env -- scripts/common.sh@368 -- # return 0 00:04:57.400 10:25:32 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:57.400 10:25:32 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:57.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.400 --rc genhtml_branch_coverage=1 00:04:57.400 --rc genhtml_function_coverage=1 00:04:57.400 --rc genhtml_legend=1 00:04:57.400 --rc geninfo_all_blocks=1 00:04:57.400 --rc geninfo_unexecuted_blocks=1 00:04:57.400 00:04:57.400 ' 00:04:57.400 10:25:32 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:57.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.400 --rc genhtml_branch_coverage=1 00:04:57.400 --rc genhtml_function_coverage=1 00:04:57.400 --rc genhtml_legend=1 00:04:57.400 --rc geninfo_all_blocks=1 00:04:57.400 --rc geninfo_unexecuted_blocks=1 00:04:57.400 00:04:57.400 ' 00:04:57.400 10:25:32 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:57.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.400 --rc genhtml_branch_coverage=1 00:04:57.400 --rc genhtml_function_coverage=1 00:04:57.400 --rc 
genhtml_legend=1 00:04:57.400 --rc geninfo_all_blocks=1 00:04:57.400 --rc geninfo_unexecuted_blocks=1 00:04:57.400 00:04:57.400 ' 00:04:57.400 10:25:32 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:57.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.400 --rc genhtml_branch_coverage=1 00:04:57.400 --rc genhtml_function_coverage=1 00:04:57.400 --rc genhtml_legend=1 00:04:57.400 --rc geninfo_all_blocks=1 00:04:57.400 --rc geninfo_unexecuted_blocks=1 00:04:57.400 00:04:57.400 ' 00:04:57.400 10:25:32 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:57.400 10:25:32 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:57.400 10:25:32 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:57.400 10:25:32 env -- common/autotest_common.sh@10 -- # set +x 00:04:57.400 ************************************ 00:04:57.400 START TEST env_memory 00:04:57.400 ************************************ 00:04:57.400 10:25:32 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:57.400 00:04:57.400 00:04:57.400 CUnit - A unit testing framework for C - Version 2.1-3 00:04:57.400 http://cunit.sourceforge.net/ 00:04:57.400 00:04:57.400 00:04:57.400 Suite: memory 00:04:57.400 Test: alloc and free memory map ...[2024-09-28 10:25:32.160575] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:57.659 passed 00:04:57.659 Test: mem map translation ...[2024-09-28 10:25:32.199868] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:57.659 [2024-09-28 10:25:32.199926] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:57.659 [2024-09-28 10:25:32.199995] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:57.659 [2024-09-28 10:25:32.200011] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:57.659 passed 00:04:57.659 Test: mem map registration ...[2024-09-28 10:25:32.268666] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:57.659 [2024-09-28 10:25:32.268718] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:57.659 passed 00:04:57.659 Test: mem map adjacent registrations ...passed 00:04:57.659 00:04:57.659 Run Summary: Type Total Ran Passed Failed Inactive 00:04:57.659 suites 1 1 n/a 0 0 00:04:57.659 tests 4 4 4 0 0 00:04:57.659 asserts 152 152 152 0 n/a 00:04:57.659 00:04:57.659 Elapsed time = 0.233 seconds 00:04:57.659 00:04:57.659 real 0m0.272s 00:04:57.659 user 0m0.239s 00:04:57.659 sys 0m0.023s 00:04:57.659 10:25:32 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:57.659 10:25:32 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:57.659 ************************************ 00:04:57.659 END TEST env_memory 00:04:57.659 ************************************ 00:04:57.659 10:25:32 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:57.659 10:25:32 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:57.659 10:25:32 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:57.659 10:25:32 env -- common/autotest_common.sh@10 -- # set +x 00:04:57.919 ************************************ 00:04:57.919 START TEST env_vtophys 00:04:57.919 ************************************ 00:04:57.919 10:25:32 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:57.919 EAL: lib.eal log level changed from notice to debug 00:04:57.919 EAL: Detected lcore 0 as core 0 on socket 0 00:04:57.919 EAL: Detected lcore 1 as core 0 on socket 0 00:04:57.919 EAL: Detected lcore 2 as core 0 on socket 0 00:04:57.919 EAL: Detected lcore 3 as core 0 on socket 0 00:04:57.919 EAL: Detected lcore 4 as core 0 on socket 0 00:04:57.919 EAL: Detected lcore 5 as core 0 on socket 0 00:04:57.919 EAL: Detected lcore 6 as core 0 on socket 0 00:04:57.919 EAL: Detected lcore 7 as core 0 on socket 0 00:04:57.919 EAL: Detected lcore 8 as core 0 on socket 0 00:04:57.919 EAL: Detected lcore 9 as core 0 on socket 0 00:04:57.919 EAL: Maximum logical cores by configuration: 128 00:04:57.919 EAL: Detected CPU lcores: 10 00:04:57.919 EAL: Detected NUMA nodes: 1 00:04:57.919 EAL: Checking presence of .so 'librte_eal.so.25.0' 00:04:57.919 EAL: Detected shared linkage of DPDK 00:04:57.919 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so.25.0 00:04:57.919 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so.25.0 00:04:57.919 EAL: Registered [vdev] bus. 00:04:57.919 EAL: bus.vdev log level changed from disabled to notice 00:04:57.919 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so.25.0 00:04:57.919 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so.25.0 00:04:57.919 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:04:57.919 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:04:57.919 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so 00:04:57.919 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so 00:04:57.919 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so 00:04:57.919 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so 00:04:57.919 EAL: No shared files mode enabled, IPC will be disabled 00:04:57.919 EAL: No shared files mode enabled, IPC is disabled 00:04:57.919 EAL: Selected IOVA mode 'PA' 00:04:57.919 EAL: Probing VFIO support... 00:04:57.919 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:57.919 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:57.919 EAL: Ask a virtual area of 0x2e000 bytes 00:04:57.919 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:57.919 EAL: Setting up physically contiguous memory... 
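EAL skips VFIO above because /sys/module/vfio does not exist, which is also why setup.sh bound the controllers to uio_pci_generic earlier in this log and why IOVA mode 'PA' is selected. A quick shell sketch of that decision (an approximation, not the exact setup.sh logic; the iommu_groups check is an added assumption):

    # Prefer vfio-pci only when the module is loaded and the IOMMU populated any groups;
    # otherwise fall back to uio_pci_generic, as this VM does.
    if [[ -d /sys/module/vfio_pci ]] && compgen -G '/sys/kernel/iommu_groups/*' >/dev/null; then
        driver=vfio-pci
    else
        driver=uio_pci_generic
    fi
    echo "would bind NVMe controllers to: $driver"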
00:04:57.919 EAL: Setting maximum number of open files to 524288 00:04:57.919 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:57.919 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:57.919 EAL: Ask a virtual area of 0x61000 bytes 00:04:57.919 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:57.919 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:57.919 EAL: Ask a virtual area of 0x400000000 bytes 00:04:57.919 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:57.919 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:57.919 EAL: Ask a virtual area of 0x61000 bytes 00:04:57.919 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:57.919 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:57.919 EAL: Ask a virtual area of 0x400000000 bytes 00:04:57.919 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:57.919 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:57.919 EAL: Ask a virtual area of 0x61000 bytes 00:04:57.919 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:57.919 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:57.919 EAL: Ask a virtual area of 0x400000000 bytes 00:04:57.919 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:57.919 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:57.919 EAL: Ask a virtual area of 0x61000 bytes 00:04:57.919 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:57.919 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:57.919 EAL: Ask a virtual area of 0x400000000 bytes 00:04:57.919 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:57.919 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:57.919 EAL: Hugepages will be freed exactly as allocated. 00:04:57.919 EAL: No shared files mode enabled, IPC is disabled 00:04:57.919 EAL: No shared files mode enabled, IPC is disabled 00:04:57.919 EAL: TSC frequency is ~2600000 KHz 00:04:57.919 EAL: Main lcore 0 is ready (tid=7fd25a2cfa40;cpuset=[0]) 00:04:57.920 EAL: Trying to obtain current memory policy. 00:04:57.920 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:57.920 EAL: Restoring previous memory policy: 0 00:04:57.920 EAL: request: mp_malloc_sync 00:04:57.920 EAL: No shared files mode enabled, IPC is disabled 00:04:57.920 EAL: Heap on socket 0 was expanded by 2MB 00:04:57.920 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:57.920 EAL: No shared files mode enabled, IPC is disabled 00:04:57.920 EAL: Mem event callback 'spdk:(nil)' registered 00:04:57.920 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:57.920 00:04:57.920 00:04:57.920 CUnit - A unit testing framework for C - Version 2.1-3 00:04:57.920 http://cunit.sourceforge.net/ 00:04:57.920 00:04:57.920 00:04:57.920 Suite: components_suite 00:04:58.492 Test: vtophys_malloc_test ...passed 00:04:58.492 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:04:58.492 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:58.492 EAL: Restoring previous memory policy: 4 00:04:58.492 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.492 EAL: request: mp_malloc_sync 00:04:58.492 EAL: No shared files mode enabled, IPC is disabled 00:04:58.492 EAL: Heap on socket 0 was expanded by 4MB 00:04:58.492 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.492 EAL: request: mp_malloc_sync 00:04:58.492 EAL: No shared files mode enabled, IPC is disabled 00:04:58.492 EAL: Heap on socket 0 was shrunk by 4MB 00:04:58.492 EAL: Trying to obtain current memory policy. 00:04:58.492 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:58.492 EAL: Restoring previous memory policy: 4 00:04:58.492 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.492 EAL: request: mp_malloc_sync 00:04:58.492 EAL: No shared files mode enabled, IPC is disabled 00:04:58.492 EAL: Heap on socket 0 was expanded by 6MB 00:04:58.492 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.492 EAL: request: mp_malloc_sync 00:04:58.492 EAL: No shared files mode enabled, IPC is disabled 00:04:58.492 EAL: Heap on socket 0 was shrunk by 6MB 00:04:58.492 EAL: Trying to obtain current memory policy. 00:04:58.492 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:58.492 EAL: Restoring previous memory policy: 4 00:04:58.492 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.492 EAL: request: mp_malloc_sync 00:04:58.492 EAL: No shared files mode enabled, IPC is disabled 00:04:58.492 EAL: Heap on socket 0 was expanded by 10MB 00:04:58.492 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.492 EAL: request: mp_malloc_sync 00:04:58.492 EAL: No shared files mode enabled, IPC is disabled 00:04:58.492 EAL: Heap on socket 0 was shrunk by 10MB 00:04:58.492 EAL: Trying to obtain current memory policy. 00:04:58.492 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:58.492 EAL: Restoring previous memory policy: 4 00:04:58.492 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.492 EAL: request: mp_malloc_sync 00:04:58.492 EAL: No shared files mode enabled, IPC is disabled 00:04:58.492 EAL: Heap on socket 0 was expanded by 18MB 00:04:58.492 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.492 EAL: request: mp_malloc_sync 00:04:58.492 EAL: No shared files mode enabled, IPC is disabled 00:04:58.492 EAL: Heap on socket 0 was shrunk by 18MB 00:04:58.492 EAL: Trying to obtain current memory policy. 00:04:58.492 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:58.492 EAL: Restoring previous memory policy: 4 00:04:58.492 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.492 EAL: request: mp_malloc_sync 00:04:58.492 EAL: No shared files mode enabled, IPC is disabled 00:04:58.492 EAL: Heap on socket 0 was expanded by 34MB 00:04:58.492 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.492 EAL: request: mp_malloc_sync 00:04:58.492 EAL: No shared files mode enabled, IPC is disabled 00:04:58.492 EAL: Heap on socket 0 was shrunk by 34MB 00:04:58.492 EAL: Trying to obtain current memory policy. 
00:04:58.492 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:58.492 EAL: Restoring previous memory policy: 4 00:04:58.492 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.492 EAL: request: mp_malloc_sync 00:04:58.492 EAL: No shared files mode enabled, IPC is disabled 00:04:58.492 EAL: Heap on socket 0 was expanded by 66MB 00:04:58.492 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.492 EAL: request: mp_malloc_sync 00:04:58.492 EAL: No shared files mode enabled, IPC is disabled 00:04:58.492 EAL: Heap on socket 0 was shrunk by 66MB 00:04:58.492 EAL: Trying to obtain current memory policy. 00:04:58.492 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:58.492 EAL: Restoring previous memory policy: 4 00:04:58.492 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.492 EAL: request: mp_malloc_sync 00:04:58.492 EAL: No shared files mode enabled, IPC is disabled 00:04:58.492 EAL: Heap on socket 0 was expanded by 130MB 00:04:58.492 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.492 EAL: request: mp_malloc_sync 00:04:58.492 EAL: No shared files mode enabled, IPC is disabled 00:04:58.492 EAL: Heap on socket 0 was shrunk by 130MB 00:04:58.492 EAL: Trying to obtain current memory policy. 00:04:58.492 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:58.492 EAL: Restoring previous memory policy: 4 00:04:58.492 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.492 EAL: request: mp_malloc_sync 00:04:58.492 EAL: No shared files mode enabled, IPC is disabled 00:04:58.492 EAL: Heap on socket 0 was expanded by 258MB 00:04:58.492 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.492 EAL: request: mp_malloc_sync 00:04:58.492 EAL: No shared files mode enabled, IPC is disabled 00:04:58.492 EAL: Heap on socket 0 was shrunk by 258MB 00:04:58.492 EAL: Trying to obtain current memory policy. 00:04:58.492 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:58.754 EAL: Restoring previous memory policy: 4 00:04:58.754 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.754 EAL: request: mp_malloc_sync 00:04:58.754 EAL: No shared files mode enabled, IPC is disabled 00:04:58.754 EAL: Heap on socket 0 was expanded by 514MB 00:04:58.754 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.015 EAL: request: mp_malloc_sync 00:04:59.015 EAL: No shared files mode enabled, IPC is disabled 00:04:59.015 EAL: Heap on socket 0 was shrunk by 514MB 00:04:59.015 EAL: Trying to obtain current memory policy. 
00:04:59.015 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.015 EAL: Restoring previous memory policy: 4 00:04:59.015 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.015 EAL: request: mp_malloc_sync 00:04:59.015 EAL: No shared files mode enabled, IPC is disabled 00:04:59.015 EAL: Heap on socket 0 was expanded by 1026MB 00:04:59.277 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.538 passed 00:04:59.538 00:04:59.538 Run Summary: Type Total Ran Passed Failed Inactive 00:04:59.538 suites 1 1 n/a 0 0 00:04:59.538 tests 2 2 2 0 0 00:04:59.538 asserts 5274 5274 5274 0 n/a 00:04:59.538 00:04:59.538 Elapsed time = 1.437 seconds 00:04:59.538 EAL: request: mp_malloc_sync 00:04:59.538 EAL: No shared files mode enabled, IPC is disabled 00:04:59.538 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:59.538 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.538 EAL: request: mp_malloc_sync 00:04:59.538 EAL: No shared files mode enabled, IPC is disabled 00:04:59.538 EAL: Heap on socket 0 was shrunk by 2MB 00:04:59.538 EAL: No shared files mode enabled, IPC is disabled 00:04:59.538 EAL: No shared files mode enabled, IPC is disabled 00:04:59.538 EAL: No shared files mode enabled, IPC is disabled 00:04:59.538 00:04:59.538 real 0m1.678s 00:04:59.538 user 0m0.700s 00:04:59.538 sys 0m0.837s 00:04:59.538 10:25:34 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:59.538 ************************************ 00:04:59.538 END TEST env_vtophys 00:04:59.538 ************************************ 00:04:59.538 10:25:34 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:59.538 10:25:34 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:59.538 10:25:34 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:59.538 10:25:34 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:59.538 10:25:34 env -- common/autotest_common.sh@10 -- # set +x 00:04:59.538 ************************************ 00:04:59.538 START TEST env_pci 00:04:59.538 ************************************ 00:04:59.538 10:25:34 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:59.538 00:04:59.538 00:04:59.538 CUnit - A unit testing framework for C - Version 2.1-3 00:04:59.538 http://cunit.sourceforge.net/ 00:04:59.538 00:04:59.538 00:04:59.538 Suite: pci 00:04:59.538 Test: pci_hook ...[2024-09-28 10:25:34.211296] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 70630 has claimed it 00:04:59.538 passed 00:04:59.538 00:04:59.538 Run Summary: Type Total Ran Passed Failed Inactive 00:04:59.538 suites 1 1 n/a 0 0 00:04:59.538 tests 1 1 1 0 0 00:04:59.538 asserts 25 25 25 0 n/a 00:04:59.538 00:04:59.538 Elapsed time = 0.005 seconds 00:04:59.538 EAL: Cannot find device (10000:00:01.0) 00:04:59.538 EAL: Failed to attach device on primary process 00:04:59.538 00:04:59.538 real 0m0.056s 00:04:59.538 user 0m0.023s 00:04:59.538 sys 0m0.032s 00:04:59.538 10:25:34 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:59.538 10:25:34 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:59.538 ************************************ 00:04:59.538 END TEST env_pci 00:04:59.538 ************************************ 00:04:59.538 10:25:34 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:59.538 10:25:34 env -- env/env.sh@15 -- # uname 00:04:59.538 10:25:34 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:59.538 10:25:34 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:59.538 10:25:34 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:59.538 10:25:34 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:59.538 10:25:34 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:59.538 10:25:34 env -- common/autotest_common.sh@10 -- # set +x 00:04:59.538 ************************************ 00:04:59.538 START TEST env_dpdk_post_init 00:04:59.538 ************************************ 00:04:59.538 10:25:34 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:59.798 EAL: Detected CPU lcores: 10 00:04:59.798 EAL: Detected NUMA nodes: 1 00:04:59.798 EAL: Detected shared linkage of DPDK 00:04:59.798 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:59.798 EAL: Selected IOVA mode 'PA' 00:04:59.798 Starting DPDK initialization... 00:04:59.798 Starting SPDK post initialization... 00:04:59.798 SPDK NVMe probe 00:04:59.798 Attaching to 0000:00:10.0 00:04:59.798 Attaching to 0000:00:11.0 00:04:59.798 Attaching to 0000:00:12.0 00:04:59.798 Attaching to 0000:00:13.0 00:04:59.798 Attached to 0000:00:11.0 00:04:59.798 Attached to 0000:00:13.0 00:04:59.798 Attached to 0000:00:10.0 00:04:59.798 Attached to 0000:00:12.0 00:04:59.798 Cleaning up... 00:04:59.798 00:04:59.798 real 0m0.232s 00:04:59.798 user 0m0.060s 00:04:59.798 sys 0m0.072s 00:04:59.798 ************************************ 00:04:59.798 END TEST env_dpdk_post_init 00:04:59.798 ************************************ 00:04:59.798 10:25:34 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:59.798 10:25:34 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:00.059 10:25:34 env -- env/env.sh@26 -- # uname 00:05:00.059 10:25:34 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:00.059 10:25:34 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:00.059 10:25:34 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:00.059 10:25:34 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:00.059 10:25:34 env -- common/autotest_common.sh@10 -- # set +x 00:05:00.059 ************************************ 00:05:00.059 START TEST env_mem_callbacks 00:05:00.059 ************************************ 00:05:00.059 10:25:34 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:00.059 EAL: Detected CPU lcores: 10 00:05:00.059 EAL: Detected NUMA nodes: 1 00:05:00.059 EAL: Detected shared linkage of DPDK 00:05:00.059 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:00.059 EAL: Selected IOVA mode 'PA' 00:05:00.059 00:05:00.059 00:05:00.059 CUnit - A unit testing framework for C - Version 2.1-3 00:05:00.059 http://cunit.sourceforge.net/ 00:05:00.059 00:05:00.059 00:05:00.059 Suite: memory 00:05:00.059 Test: test ... 
00:05:00.059 register 0x200000200000 2097152 00:05:00.059 malloc 3145728 00:05:00.059 register 0x200000400000 4194304 00:05:00.059 buf 0x200000500000 len 3145728 PASSED 00:05:00.059 malloc 64 00:05:00.059 buf 0x2000004fff40 len 64 PASSED 00:05:00.059 malloc 4194304 00:05:00.059 register 0x200000800000 6291456 00:05:00.059 buf 0x200000a00000 len 4194304 PASSED 00:05:00.059 free 0x200000500000 3145728 00:05:00.059 free 0x2000004fff40 64 00:05:00.059 unregister 0x200000400000 4194304 PASSED 00:05:00.059 free 0x200000a00000 4194304 00:05:00.059 unregister 0x200000800000 6291456 PASSED 00:05:00.059 malloc 8388608 00:05:00.059 register 0x200000400000 10485760 00:05:00.059 buf 0x200000600000 len 8388608 PASSED 00:05:00.059 free 0x200000600000 8388608 00:05:00.059 unregister 0x200000400000 10485760 PASSED 00:05:00.059 passed 00:05:00.059 00:05:00.059 Run Summary: Type Total Ran Passed Failed Inactive 00:05:00.059 suites 1 1 n/a 0 0 00:05:00.059 tests 1 1 1 0 0 00:05:00.059 asserts 15 15 15 0 n/a 00:05:00.059 00:05:00.059 Elapsed time = 0.012 seconds 00:05:00.059 00:05:00.059 real 0m0.178s 00:05:00.059 user 0m0.027s 00:05:00.059 sys 0m0.048s 00:05:00.059 10:25:34 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:00.059 ************************************ 00:05:00.059 END TEST env_mem_callbacks 00:05:00.059 10:25:34 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:00.059 ************************************ 00:05:00.320 ************************************ 00:05:00.320 END TEST env 00:05:00.320 ************************************ 00:05:00.320 00:05:00.320 real 0m2.910s 00:05:00.320 user 0m1.224s 00:05:00.320 sys 0m1.220s 00:05:00.320 10:25:34 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:00.320 10:25:34 env -- common/autotest_common.sh@10 -- # set +x 00:05:00.320 10:25:34 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:00.320 10:25:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:00.320 10:25:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:00.320 10:25:34 -- common/autotest_common.sh@10 -- # set +x 00:05:00.320 ************************************ 00:05:00.320 START TEST rpc 00:05:00.320 ************************************ 00:05:00.320 10:25:34 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:00.320 * Looking for test storage... 
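The lcov probe that ran before the env suite repeats just below for the rpc suite: read the lcov version, compare it field by field against 2, and only then export the branch/function-coverage --rc options. A compact sketch of that gate (the field-wise compare stands in for cmp_versions from scripts/common.sh):

    ver=$(lcov --version | awk '{print $NF}')          # e.g. 1.15
    IFS='.-:' read -ra v <<< "$ver"
    # lcov older than 2.x spells the coverage switches as --rc options.
    if (( v[0] < 2 )); then
        export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi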
00:05:00.320 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:00.320 10:25:34 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:00.320 10:25:34 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:00.320 10:25:34 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:00.320 10:25:35 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:00.320 10:25:35 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:00.320 10:25:35 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:00.320 10:25:35 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:00.320 10:25:35 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.320 10:25:35 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:00.321 10:25:35 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:00.321 10:25:35 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:00.321 10:25:35 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:00.321 10:25:35 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:00.321 10:25:35 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:00.321 10:25:35 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:00.321 10:25:35 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:00.321 10:25:35 rpc -- scripts/common.sh@345 -- # : 1 00:05:00.321 10:25:35 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:00.321 10:25:35 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:00.321 10:25:35 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:00.321 10:25:35 rpc -- scripts/common.sh@353 -- # local d=1 00:05:00.321 10:25:35 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.321 10:25:35 rpc -- scripts/common.sh@355 -- # echo 1 00:05:00.321 10:25:35 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:00.321 10:25:35 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:00.321 10:25:35 rpc -- scripts/common.sh@353 -- # local d=2 00:05:00.321 10:25:35 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.321 10:25:35 rpc -- scripts/common.sh@355 -- # echo 2 00:05:00.321 10:25:35 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:00.321 10:25:35 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:00.321 10:25:35 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:00.321 10:25:35 rpc -- scripts/common.sh@368 -- # return 0 00:05:00.321 10:25:35 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.321 10:25:35 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:00.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.321 --rc genhtml_branch_coverage=1 00:05:00.321 --rc genhtml_function_coverage=1 00:05:00.321 --rc genhtml_legend=1 00:05:00.321 --rc geninfo_all_blocks=1 00:05:00.321 --rc geninfo_unexecuted_blocks=1 00:05:00.321 00:05:00.321 ' 00:05:00.321 10:25:35 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:00.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.321 --rc genhtml_branch_coverage=1 00:05:00.321 --rc genhtml_function_coverage=1 00:05:00.321 --rc genhtml_legend=1 00:05:00.321 --rc geninfo_all_blocks=1 00:05:00.321 --rc geninfo_unexecuted_blocks=1 00:05:00.321 00:05:00.321 ' 00:05:00.321 10:25:35 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:00.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.321 --rc genhtml_branch_coverage=1 00:05:00.321 --rc genhtml_function_coverage=1 00:05:00.321 --rc 
genhtml_legend=1 00:05:00.321 --rc geninfo_all_blocks=1 00:05:00.321 --rc geninfo_unexecuted_blocks=1 00:05:00.321 00:05:00.321 ' 00:05:00.321 10:25:35 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:00.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.321 --rc genhtml_branch_coverage=1 00:05:00.321 --rc genhtml_function_coverage=1 00:05:00.321 --rc genhtml_legend=1 00:05:00.321 --rc geninfo_all_blocks=1 00:05:00.321 --rc geninfo_unexecuted_blocks=1 00:05:00.321 00:05:00.321 ' 00:05:00.321 10:25:35 rpc -- rpc/rpc.sh@65 -- # spdk_pid=70757 00:05:00.321 10:25:35 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:00.321 10:25:35 rpc -- rpc/rpc.sh@67 -- # waitforlisten 70757 00:05:00.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.321 10:25:35 rpc -- common/autotest_common.sh@831 -- # '[' -z 70757 ']' 00:05:00.321 10:25:35 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.321 10:25:35 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:00.321 10:25:35 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.321 10:25:35 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:00.321 10:25:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.321 10:25:35 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:00.582 [2024-09-28 10:25:35.145812] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:05:00.582 [2024-09-28 10:25:35.145984] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70757 ] 00:05:00.582 [2024-09-28 10:25:35.277899] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:00.582 [2024-09-28 10:25:35.297494] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.582 [2024-09-28 10:25:35.348575] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:00.582 [2024-09-28 10:25:35.348641] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 70757' to capture a snapshot of events at runtime. 00:05:00.582 [2024-09-28 10:25:35.348652] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:00.582 [2024-09-28 10:25:35.348665] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:00.582 [2024-09-28 10:25:35.348677] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid70757 for offline analysis/debug. 
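rpc.sh starts spdk_tgt with bdev tracepoints, remembers its PID for the cleanup trap, and waits for the daemon to answer on /var/tmp/spdk.sock before any rpc_cmd is issued, which is the startup sequence logged above. A stripped-down sketch of that handshake (waitforlisten really polls the RPC socket in a retry loop; here rpc.py's own timeout option stands in for it):

    "$rootdir/build/bin/spdk_tgt" -e bdev &        # target with the bdev tracepoint group
    spdk_pid=$!
    trap 'kill "$spdk_pid"; exit 1' SIGINT SIGTERM EXIT

    # Block until the RPC server responds on the default UNIX socket (30 s budget).
    "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock -t 30 rpc_get_methods >/dev/null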
00:05:00.582 [2024-09-28 10:25:35.348718] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.528 10:25:35 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:01.528 10:25:35 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:01.528 10:25:35 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:01.528 10:25:35 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:01.528 10:25:35 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:01.528 10:25:35 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:01.528 10:25:35 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:01.528 10:25:35 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:01.528 10:25:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.528 ************************************ 00:05:01.528 START TEST rpc_integrity 00:05:01.528 ************************************ 00:05:01.528 10:25:36 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:01.528 10:25:36 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:01.528 10:25:36 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.528 10:25:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.528 10:25:36 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.528 10:25:36 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:01.528 10:25:36 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:01.528 10:25:36 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:01.528 10:25:36 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:01.528 10:25:36 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.528 10:25:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.528 10:25:36 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.528 10:25:36 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:01.528 10:25:36 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:01.528 10:25:36 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.528 10:25:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.528 10:25:36 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.528 10:25:36 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:01.528 { 00:05:01.528 "name": "Malloc0", 00:05:01.528 "aliases": [ 00:05:01.528 "0c143a76-7e99-477a-842d-774e7605e61b" 00:05:01.528 ], 00:05:01.528 "product_name": "Malloc disk", 00:05:01.528 "block_size": 512, 00:05:01.528 "num_blocks": 16384, 00:05:01.528 "uuid": "0c143a76-7e99-477a-842d-774e7605e61b", 00:05:01.528 "assigned_rate_limits": { 00:05:01.528 "rw_ios_per_sec": 0, 00:05:01.528 "rw_mbytes_per_sec": 0, 00:05:01.528 "r_mbytes_per_sec": 0, 00:05:01.528 "w_mbytes_per_sec": 0 00:05:01.528 }, 00:05:01.528 "claimed": false, 00:05:01.528 "zoned": false, 00:05:01.528 "supported_io_types": { 00:05:01.528 "read": true, 00:05:01.528 "write": true, 00:05:01.528 "unmap": true, 00:05:01.528 "flush": true, 
00:05:01.528 "reset": true, 00:05:01.528 "nvme_admin": false, 00:05:01.528 "nvme_io": false, 00:05:01.528 "nvme_io_md": false, 00:05:01.528 "write_zeroes": true, 00:05:01.528 "zcopy": true, 00:05:01.528 "get_zone_info": false, 00:05:01.528 "zone_management": false, 00:05:01.528 "zone_append": false, 00:05:01.528 "compare": false, 00:05:01.528 "compare_and_write": false, 00:05:01.528 "abort": true, 00:05:01.528 "seek_hole": false, 00:05:01.528 "seek_data": false, 00:05:01.528 "copy": true, 00:05:01.528 "nvme_iov_md": false 00:05:01.528 }, 00:05:01.528 "memory_domains": [ 00:05:01.528 { 00:05:01.528 "dma_device_id": "system", 00:05:01.528 "dma_device_type": 1 00:05:01.528 }, 00:05:01.528 { 00:05:01.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:01.528 "dma_device_type": 2 00:05:01.528 } 00:05:01.528 ], 00:05:01.528 "driver_specific": {} 00:05:01.528 } 00:05:01.528 ]' 00:05:01.528 10:25:36 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:01.528 10:25:36 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:01.529 10:25:36 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:01.529 10:25:36 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.529 10:25:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.529 [2024-09-28 10:25:36.118899] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:01.529 [2024-09-28 10:25:36.119004] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:01.529 [2024-09-28 10:25:36.119032] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:05:01.529 [2024-09-28 10:25:36.119045] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:01.529 [2024-09-28 10:25:36.121677] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:01.529 [2024-09-28 10:25:36.121740] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:01.529 Passthru0 00:05:01.529 10:25:36 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.529 10:25:36 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:01.529 10:25:36 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.529 10:25:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.529 10:25:36 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.529 10:25:36 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:01.529 { 00:05:01.529 "name": "Malloc0", 00:05:01.529 "aliases": [ 00:05:01.529 "0c143a76-7e99-477a-842d-774e7605e61b" 00:05:01.529 ], 00:05:01.529 "product_name": "Malloc disk", 00:05:01.529 "block_size": 512, 00:05:01.529 "num_blocks": 16384, 00:05:01.529 "uuid": "0c143a76-7e99-477a-842d-774e7605e61b", 00:05:01.529 "assigned_rate_limits": { 00:05:01.529 "rw_ios_per_sec": 0, 00:05:01.529 "rw_mbytes_per_sec": 0, 00:05:01.529 "r_mbytes_per_sec": 0, 00:05:01.529 "w_mbytes_per_sec": 0 00:05:01.529 }, 00:05:01.529 "claimed": true, 00:05:01.529 "claim_type": "exclusive_write", 00:05:01.529 "zoned": false, 00:05:01.529 "supported_io_types": { 00:05:01.529 "read": true, 00:05:01.529 "write": true, 00:05:01.529 "unmap": true, 00:05:01.529 "flush": true, 00:05:01.529 "reset": true, 00:05:01.529 "nvme_admin": false, 00:05:01.529 "nvme_io": false, 00:05:01.529 "nvme_io_md": false, 00:05:01.529 "write_zeroes": true, 00:05:01.529 "zcopy": true, 
00:05:01.529 "get_zone_info": false, 00:05:01.529 "zone_management": false, 00:05:01.529 "zone_append": false, 00:05:01.529 "compare": false, 00:05:01.529 "compare_and_write": false, 00:05:01.529 "abort": true, 00:05:01.529 "seek_hole": false, 00:05:01.529 "seek_data": false, 00:05:01.529 "copy": true, 00:05:01.529 "nvme_iov_md": false 00:05:01.529 }, 00:05:01.529 "memory_domains": [ 00:05:01.529 { 00:05:01.529 "dma_device_id": "system", 00:05:01.529 "dma_device_type": 1 00:05:01.529 }, 00:05:01.529 { 00:05:01.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:01.529 "dma_device_type": 2 00:05:01.529 } 00:05:01.529 ], 00:05:01.529 "driver_specific": {} 00:05:01.529 }, 00:05:01.529 { 00:05:01.529 "name": "Passthru0", 00:05:01.529 "aliases": [ 00:05:01.529 "755025db-eca7-5298-941c-05ceee09d815" 00:05:01.529 ], 00:05:01.529 "product_name": "passthru", 00:05:01.529 "block_size": 512, 00:05:01.529 "num_blocks": 16384, 00:05:01.529 "uuid": "755025db-eca7-5298-941c-05ceee09d815", 00:05:01.529 "assigned_rate_limits": { 00:05:01.529 "rw_ios_per_sec": 0, 00:05:01.529 "rw_mbytes_per_sec": 0, 00:05:01.529 "r_mbytes_per_sec": 0, 00:05:01.529 "w_mbytes_per_sec": 0 00:05:01.529 }, 00:05:01.529 "claimed": false, 00:05:01.529 "zoned": false, 00:05:01.529 "supported_io_types": { 00:05:01.529 "read": true, 00:05:01.529 "write": true, 00:05:01.529 "unmap": true, 00:05:01.529 "flush": true, 00:05:01.529 "reset": true, 00:05:01.529 "nvme_admin": false, 00:05:01.529 "nvme_io": false, 00:05:01.529 "nvme_io_md": false, 00:05:01.529 "write_zeroes": true, 00:05:01.529 "zcopy": true, 00:05:01.529 "get_zone_info": false, 00:05:01.529 "zone_management": false, 00:05:01.529 "zone_append": false, 00:05:01.529 "compare": false, 00:05:01.529 "compare_and_write": false, 00:05:01.529 "abort": true, 00:05:01.529 "seek_hole": false, 00:05:01.529 "seek_data": false, 00:05:01.529 "copy": true, 00:05:01.529 "nvme_iov_md": false 00:05:01.529 }, 00:05:01.529 "memory_domains": [ 00:05:01.529 { 00:05:01.529 "dma_device_id": "system", 00:05:01.529 "dma_device_type": 1 00:05:01.529 }, 00:05:01.529 { 00:05:01.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:01.529 "dma_device_type": 2 00:05:01.529 } 00:05:01.529 ], 00:05:01.529 "driver_specific": { 00:05:01.529 "passthru": { 00:05:01.529 "name": "Passthru0", 00:05:01.529 "base_bdev_name": "Malloc0" 00:05:01.529 } 00:05:01.529 } 00:05:01.529 } 00:05:01.529 ]' 00:05:01.529 10:25:36 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:01.529 10:25:36 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:01.529 10:25:36 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:01.529 10:25:36 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.529 10:25:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.529 10:25:36 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.529 10:25:36 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:01.529 10:25:36 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.529 10:25:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.529 10:25:36 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.529 10:25:36 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:01.529 10:25:36 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.529 10:25:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:05:01.529 10:25:36 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.529 10:25:36 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:01.529 10:25:36 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:01.529 ************************************ 00:05:01.529 END TEST rpc_integrity 00:05:01.529 ************************************ 00:05:01.529 10:25:36 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:01.529 00:05:01.529 real 0m0.232s 00:05:01.529 user 0m0.135s 00:05:01.529 sys 0m0.029s 00:05:01.529 10:25:36 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:01.529 10:25:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.529 10:25:36 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:01.529 10:25:36 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:01.529 10:25:36 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:01.529 10:25:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.529 ************************************ 00:05:01.529 START TEST rpc_plugins 00:05:01.529 ************************************ 00:05:01.529 10:25:36 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:01.529 10:25:36 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:01.529 10:25:36 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.529 10:25:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:01.792 10:25:36 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.792 10:25:36 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:01.792 10:25:36 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:01.792 10:25:36 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.792 10:25:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:01.792 10:25:36 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.792 10:25:36 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:01.792 { 00:05:01.792 "name": "Malloc1", 00:05:01.792 "aliases": [ 00:05:01.792 "f7967c12-6919-426a-96c5-1b51dac970ff" 00:05:01.792 ], 00:05:01.792 "product_name": "Malloc disk", 00:05:01.792 "block_size": 4096, 00:05:01.792 "num_blocks": 256, 00:05:01.792 "uuid": "f7967c12-6919-426a-96c5-1b51dac970ff", 00:05:01.792 "assigned_rate_limits": { 00:05:01.792 "rw_ios_per_sec": 0, 00:05:01.792 "rw_mbytes_per_sec": 0, 00:05:01.792 "r_mbytes_per_sec": 0, 00:05:01.792 "w_mbytes_per_sec": 0 00:05:01.792 }, 00:05:01.792 "claimed": false, 00:05:01.792 "zoned": false, 00:05:01.792 "supported_io_types": { 00:05:01.792 "read": true, 00:05:01.792 "write": true, 00:05:01.792 "unmap": true, 00:05:01.792 "flush": true, 00:05:01.792 "reset": true, 00:05:01.792 "nvme_admin": false, 00:05:01.792 "nvme_io": false, 00:05:01.792 "nvme_io_md": false, 00:05:01.792 "write_zeroes": true, 00:05:01.792 "zcopy": true, 00:05:01.792 "get_zone_info": false, 00:05:01.792 "zone_management": false, 00:05:01.792 "zone_append": false, 00:05:01.792 "compare": false, 00:05:01.792 "compare_and_write": false, 00:05:01.792 "abort": true, 00:05:01.792 "seek_hole": false, 00:05:01.792 "seek_data": false, 00:05:01.792 "copy": true, 00:05:01.792 "nvme_iov_md": false 00:05:01.792 }, 00:05:01.792 "memory_domains": [ 00:05:01.792 { 00:05:01.792 "dma_device_id": "system", 00:05:01.792 "dma_device_type": 1 00:05:01.792 }, 00:05:01.792 { 00:05:01.792 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:05:01.792 "dma_device_type": 2 00:05:01.793 } 00:05:01.793 ], 00:05:01.793 "driver_specific": {} 00:05:01.793 } 00:05:01.793 ]' 00:05:01.793 10:25:36 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:01.793 10:25:36 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:01.793 10:25:36 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:01.793 10:25:36 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.793 10:25:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:01.793 10:25:36 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.793 10:25:36 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:01.793 10:25:36 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.793 10:25:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:01.793 10:25:36 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.793 10:25:36 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:01.793 10:25:36 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:01.793 ************************************ 00:05:01.793 END TEST rpc_plugins 00:05:01.793 ************************************ 00:05:01.793 10:25:36 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:01.793 00:05:01.793 real 0m0.115s 00:05:01.793 user 0m0.055s 00:05:01.793 sys 0m0.020s 00:05:01.793 10:25:36 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:01.793 10:25:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:01.793 10:25:36 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:01.793 10:25:36 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:01.793 10:25:36 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:01.793 10:25:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.793 ************************************ 00:05:01.793 START TEST rpc_trace_cmd_test 00:05:01.793 ************************************ 00:05:01.793 10:25:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:01.793 10:25:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:01.793 10:25:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:01.793 10:25:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.793 10:25:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:01.793 10:25:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.793 10:25:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:01.793 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid70757", 00:05:01.793 "tpoint_group_mask": "0x8", 00:05:01.793 "iscsi_conn": { 00:05:01.793 "mask": "0x2", 00:05:01.793 "tpoint_mask": "0x0" 00:05:01.793 }, 00:05:01.793 "scsi": { 00:05:01.793 "mask": "0x4", 00:05:01.793 "tpoint_mask": "0x0" 00:05:01.793 }, 00:05:01.793 "bdev": { 00:05:01.793 "mask": "0x8", 00:05:01.793 "tpoint_mask": "0xffffffffffffffff" 00:05:01.793 }, 00:05:01.793 "nvmf_rdma": { 00:05:01.793 "mask": "0x10", 00:05:01.793 "tpoint_mask": "0x0" 00:05:01.793 }, 00:05:01.793 "nvmf_tcp": { 00:05:01.793 "mask": "0x20", 00:05:01.793 "tpoint_mask": "0x0" 00:05:01.793 }, 00:05:01.793 "ftl": { 00:05:01.793 "mask": "0x40", 00:05:01.793 "tpoint_mask": "0x0" 00:05:01.793 }, 00:05:01.793 "blobfs": { 00:05:01.793 "mask": "0x80", 00:05:01.793 
"tpoint_mask": "0x0" 00:05:01.793 }, 00:05:01.793 "dsa": { 00:05:01.793 "mask": "0x200", 00:05:01.793 "tpoint_mask": "0x0" 00:05:01.793 }, 00:05:01.793 "thread": { 00:05:01.793 "mask": "0x400", 00:05:01.793 "tpoint_mask": "0x0" 00:05:01.793 }, 00:05:01.793 "nvme_pcie": { 00:05:01.793 "mask": "0x800", 00:05:01.793 "tpoint_mask": "0x0" 00:05:01.793 }, 00:05:01.793 "iaa": { 00:05:01.793 "mask": "0x1000", 00:05:01.793 "tpoint_mask": "0x0" 00:05:01.793 }, 00:05:01.793 "nvme_tcp": { 00:05:01.793 "mask": "0x2000", 00:05:01.793 "tpoint_mask": "0x0" 00:05:01.793 }, 00:05:01.793 "bdev_nvme": { 00:05:01.793 "mask": "0x4000", 00:05:01.793 "tpoint_mask": "0x0" 00:05:01.793 }, 00:05:01.793 "sock": { 00:05:01.793 "mask": "0x8000", 00:05:01.793 "tpoint_mask": "0x0" 00:05:01.793 }, 00:05:01.793 "blob": { 00:05:01.793 "mask": "0x10000", 00:05:01.793 "tpoint_mask": "0x0" 00:05:01.793 }, 00:05:01.793 "bdev_raid": { 00:05:01.793 "mask": "0x20000", 00:05:01.793 "tpoint_mask": "0x0" 00:05:01.793 } 00:05:01.793 }' 00:05:01.793 10:25:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:01.793 10:25:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:05:01.793 10:25:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:01.793 10:25:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:01.793 10:25:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:02.055 10:25:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:02.055 10:25:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:02.055 10:25:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:02.055 10:25:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:02.055 ************************************ 00:05:02.055 END TEST rpc_trace_cmd_test 00:05:02.055 ************************************ 00:05:02.055 10:25:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:02.055 00:05:02.055 real 0m0.176s 00:05:02.055 user 0m0.144s 00:05:02.055 sys 0m0.019s 00:05:02.055 10:25:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:02.055 10:25:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:02.055 10:25:36 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:02.055 10:25:36 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:02.055 10:25:36 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:02.055 10:25:36 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:02.055 10:25:36 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:02.055 10:25:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.055 ************************************ 00:05:02.055 START TEST rpc_daemon_integrity 00:05:02.055 ************************************ 00:05:02.055 10:25:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:02.055 10:25:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:02.055 10:25:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.055 10:25:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.055 10:25:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.055 10:25:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:02.055 10:25:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 
00:05:02.055 10:25:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:02.055 10:25:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:02.055 10:25:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.055 10:25:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.055 10:25:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.055 10:25:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:02.055 10:25:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:02.055 10:25:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.055 10:25:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.055 10:25:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.055 10:25:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:02.055 { 00:05:02.055 "name": "Malloc2", 00:05:02.055 "aliases": [ 00:05:02.055 "c9b7a6ad-12d7-43be-a00c-317e5a3df873" 00:05:02.055 ], 00:05:02.055 "product_name": "Malloc disk", 00:05:02.055 "block_size": 512, 00:05:02.055 "num_blocks": 16384, 00:05:02.055 "uuid": "c9b7a6ad-12d7-43be-a00c-317e5a3df873", 00:05:02.055 "assigned_rate_limits": { 00:05:02.055 "rw_ios_per_sec": 0, 00:05:02.055 "rw_mbytes_per_sec": 0, 00:05:02.055 "r_mbytes_per_sec": 0, 00:05:02.055 "w_mbytes_per_sec": 0 00:05:02.055 }, 00:05:02.055 "claimed": false, 00:05:02.055 "zoned": false, 00:05:02.055 "supported_io_types": { 00:05:02.055 "read": true, 00:05:02.055 "write": true, 00:05:02.055 "unmap": true, 00:05:02.055 "flush": true, 00:05:02.055 "reset": true, 00:05:02.055 "nvme_admin": false, 00:05:02.055 "nvme_io": false, 00:05:02.055 "nvme_io_md": false, 00:05:02.055 "write_zeroes": true, 00:05:02.055 "zcopy": true, 00:05:02.055 "get_zone_info": false, 00:05:02.055 "zone_management": false, 00:05:02.055 "zone_append": false, 00:05:02.055 "compare": false, 00:05:02.055 "compare_and_write": false, 00:05:02.055 "abort": true, 00:05:02.055 "seek_hole": false, 00:05:02.055 "seek_data": false, 00:05:02.055 "copy": true, 00:05:02.055 "nvme_iov_md": false 00:05:02.055 }, 00:05:02.055 "memory_domains": [ 00:05:02.055 { 00:05:02.055 "dma_device_id": "system", 00:05:02.055 "dma_device_type": 1 00:05:02.055 }, 00:05:02.055 { 00:05:02.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:02.055 "dma_device_type": 2 00:05:02.055 } 00:05:02.055 ], 00:05:02.055 "driver_specific": {} 00:05:02.055 } 00:05:02.055 ]' 00:05:02.055 10:25:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:02.055 10:25:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:02.055 10:25:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:02.055 10:25:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.055 10:25:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.055 [2024-09-28 10:25:36.812325] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:02.055 [2024-09-28 10:25:36.812549] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:02.055 [2024-09-28 10:25:36.812580] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:05:02.055 [2024-09-28 10:25:36.812593] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:05:02.055 [2024-09-28 10:25:36.815156] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:02.055 [2024-09-28 10:25:36.815209] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:02.055 Passthru0 00:05:02.055 10:25:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.055 10:25:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:02.055 10:25:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.055 10:25:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.319 10:25:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.319 10:25:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:02.319 { 00:05:02.319 "name": "Malloc2", 00:05:02.319 "aliases": [ 00:05:02.319 "c9b7a6ad-12d7-43be-a00c-317e5a3df873" 00:05:02.319 ], 00:05:02.319 "product_name": "Malloc disk", 00:05:02.319 "block_size": 512, 00:05:02.319 "num_blocks": 16384, 00:05:02.319 "uuid": "c9b7a6ad-12d7-43be-a00c-317e5a3df873", 00:05:02.319 "assigned_rate_limits": { 00:05:02.319 "rw_ios_per_sec": 0, 00:05:02.319 "rw_mbytes_per_sec": 0, 00:05:02.319 "r_mbytes_per_sec": 0, 00:05:02.319 "w_mbytes_per_sec": 0 00:05:02.319 }, 00:05:02.319 "claimed": true, 00:05:02.319 "claim_type": "exclusive_write", 00:05:02.319 "zoned": false, 00:05:02.319 "supported_io_types": { 00:05:02.319 "read": true, 00:05:02.319 "write": true, 00:05:02.319 "unmap": true, 00:05:02.319 "flush": true, 00:05:02.319 "reset": true, 00:05:02.319 "nvme_admin": false, 00:05:02.319 "nvme_io": false, 00:05:02.319 "nvme_io_md": false, 00:05:02.319 "write_zeroes": true, 00:05:02.319 "zcopy": true, 00:05:02.319 "get_zone_info": false, 00:05:02.319 "zone_management": false, 00:05:02.319 "zone_append": false, 00:05:02.319 "compare": false, 00:05:02.319 "compare_and_write": false, 00:05:02.319 "abort": true, 00:05:02.319 "seek_hole": false, 00:05:02.319 "seek_data": false, 00:05:02.319 "copy": true, 00:05:02.319 "nvme_iov_md": false 00:05:02.319 }, 00:05:02.319 "memory_domains": [ 00:05:02.319 { 00:05:02.319 "dma_device_id": "system", 00:05:02.319 "dma_device_type": 1 00:05:02.319 }, 00:05:02.319 { 00:05:02.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:02.319 "dma_device_type": 2 00:05:02.319 } 00:05:02.319 ], 00:05:02.319 "driver_specific": {} 00:05:02.319 }, 00:05:02.319 { 00:05:02.319 "name": "Passthru0", 00:05:02.319 "aliases": [ 00:05:02.319 "2ee14e76-26da-5882-942b-2ada2b2ba422" 00:05:02.319 ], 00:05:02.319 "product_name": "passthru", 00:05:02.319 "block_size": 512, 00:05:02.319 "num_blocks": 16384, 00:05:02.319 "uuid": "2ee14e76-26da-5882-942b-2ada2b2ba422", 00:05:02.319 "assigned_rate_limits": { 00:05:02.319 "rw_ios_per_sec": 0, 00:05:02.319 "rw_mbytes_per_sec": 0, 00:05:02.319 "r_mbytes_per_sec": 0, 00:05:02.319 "w_mbytes_per_sec": 0 00:05:02.319 }, 00:05:02.319 "claimed": false, 00:05:02.319 "zoned": false, 00:05:02.319 "supported_io_types": { 00:05:02.319 "read": true, 00:05:02.319 "write": true, 00:05:02.319 "unmap": true, 00:05:02.319 "flush": true, 00:05:02.319 "reset": true, 00:05:02.319 "nvme_admin": false, 00:05:02.319 "nvme_io": false, 00:05:02.319 "nvme_io_md": false, 00:05:02.319 "write_zeroes": true, 00:05:02.319 "zcopy": true, 00:05:02.319 "get_zone_info": false, 00:05:02.319 "zone_management": false, 00:05:02.319 "zone_append": false, 00:05:02.319 "compare": false, 00:05:02.319 
"compare_and_write": false, 00:05:02.319 "abort": true, 00:05:02.319 "seek_hole": false, 00:05:02.319 "seek_data": false, 00:05:02.319 "copy": true, 00:05:02.319 "nvme_iov_md": false 00:05:02.319 }, 00:05:02.319 "memory_domains": [ 00:05:02.319 { 00:05:02.319 "dma_device_id": "system", 00:05:02.319 "dma_device_type": 1 00:05:02.319 }, 00:05:02.319 { 00:05:02.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:02.319 "dma_device_type": 2 00:05:02.319 } 00:05:02.319 ], 00:05:02.319 "driver_specific": { 00:05:02.319 "passthru": { 00:05:02.319 "name": "Passthru0", 00:05:02.319 "base_bdev_name": "Malloc2" 00:05:02.319 } 00:05:02.319 } 00:05:02.319 } 00:05:02.319 ]' 00:05:02.319 10:25:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:02.319 10:25:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:02.319 10:25:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:02.319 10:25:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.319 10:25:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.319 10:25:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.319 10:25:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:02.319 10:25:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.319 10:25:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.319 10:25:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.319 10:25:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:02.319 10:25:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.319 10:25:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.319 10:25:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.319 10:25:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:02.319 10:25:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:02.319 ************************************ 00:05:02.319 END TEST rpc_daemon_integrity 00:05:02.319 ************************************ 00:05:02.319 10:25:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:02.319 00:05:02.319 real 0m0.223s 00:05:02.319 user 0m0.129s 00:05:02.319 sys 0m0.032s 00:05:02.319 10:25:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:02.319 10:25:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.319 10:25:36 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:02.319 10:25:36 rpc -- rpc/rpc.sh@84 -- # killprocess 70757 00:05:02.319 10:25:36 rpc -- common/autotest_common.sh@950 -- # '[' -z 70757 ']' 00:05:02.319 10:25:36 rpc -- common/autotest_common.sh@954 -- # kill -0 70757 00:05:02.319 10:25:36 rpc -- common/autotest_common.sh@955 -- # uname 00:05:02.319 10:25:36 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:02.319 10:25:36 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70757 00:05:02.319 killing process with pid 70757 00:05:02.319 10:25:36 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:02.319 10:25:36 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:02.319 10:25:36 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70757' 
00:05:02.319 10:25:36 rpc -- common/autotest_common.sh@969 -- # kill 70757 00:05:02.319 10:25:36 rpc -- common/autotest_common.sh@974 -- # wait 70757 00:05:02.581 00:05:02.581 real 0m2.402s 00:05:02.581 user 0m2.792s 00:05:02.581 sys 0m0.645s 00:05:02.581 ************************************ 00:05:02.581 END TEST rpc 00:05:02.581 ************************************ 00:05:02.581 10:25:37 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:02.581 10:25:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.843 10:25:37 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:02.843 10:25:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:02.843 10:25:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:02.843 10:25:37 -- common/autotest_common.sh@10 -- # set +x 00:05:02.843 ************************************ 00:05:02.843 START TEST skip_rpc 00:05:02.843 ************************************ 00:05:02.843 10:25:37 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:02.843 * Looking for test storage... 00:05:02.843 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:02.843 10:25:37 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:02.843 10:25:37 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:02.843 10:25:37 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:02.843 10:25:37 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:02.843 10:25:37 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:02.843 10:25:37 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:02.843 10:25:37 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:02.843 10:25:37 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.843 10:25:37 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:02.843 10:25:37 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:02.843 10:25:37 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:02.843 10:25:37 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:02.843 10:25:37 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:02.843 10:25:37 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:02.843 10:25:37 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:02.843 10:25:37 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:02.843 10:25:37 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:02.843 10:25:37 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:02.843 10:25:37 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:02.843 10:25:37 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:02.843 10:25:37 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:02.843 10:25:37 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.843 10:25:37 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:02.843 10:25:37 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:02.843 10:25:37 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:02.843 10:25:37 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:02.843 10:25:37 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.843 10:25:37 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:02.843 10:25:37 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:02.843 10:25:37 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:02.843 10:25:37 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:02.843 10:25:37 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:02.843 10:25:37 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.843 10:25:37 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:02.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.843 --rc genhtml_branch_coverage=1 00:05:02.843 --rc genhtml_function_coverage=1 00:05:02.843 --rc genhtml_legend=1 00:05:02.843 --rc geninfo_all_blocks=1 00:05:02.843 --rc geninfo_unexecuted_blocks=1 00:05:02.843 00:05:02.843 ' 00:05:02.843 10:25:37 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:02.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.843 --rc genhtml_branch_coverage=1 00:05:02.843 --rc genhtml_function_coverage=1 00:05:02.843 --rc genhtml_legend=1 00:05:02.843 --rc geninfo_all_blocks=1 00:05:02.843 --rc geninfo_unexecuted_blocks=1 00:05:02.843 00:05:02.843 ' 00:05:02.843 10:25:37 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:02.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.843 --rc genhtml_branch_coverage=1 00:05:02.843 --rc genhtml_function_coverage=1 00:05:02.843 --rc genhtml_legend=1 00:05:02.843 --rc geninfo_all_blocks=1 00:05:02.843 --rc geninfo_unexecuted_blocks=1 00:05:02.843 00:05:02.843 ' 00:05:02.843 10:25:37 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:02.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.843 --rc genhtml_branch_coverage=1 00:05:02.843 --rc genhtml_function_coverage=1 00:05:02.843 --rc genhtml_legend=1 00:05:02.843 --rc geninfo_all_blocks=1 00:05:02.843 --rc geninfo_unexecuted_blocks=1 00:05:02.843 00:05:02.843 ' 00:05:02.843 10:25:37 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:02.843 10:25:37 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:02.843 10:25:37 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:02.843 10:25:37 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:02.843 10:25:37 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:02.843 10:25:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.843 ************************************ 00:05:02.843 START TEST skip_rpc 00:05:02.843 ************************************ 00:05:02.843 10:25:37 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:02.843 10:25:37 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=70959 00:05:02.843 10:25:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:02.843 10:25:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:02.843 10:25:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:02.843 [2024-09-28 10:25:37.613518] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:05:02.843 [2024-09-28 10:25:37.613711] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70959 ] 00:05:03.104 [2024-09-28 10:25:37.746204] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:03.104 [2024-09-28 10:25:37.766702] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.104 [2024-09-28 10:25:37.816080] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.393 10:25:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:08.393 10:25:42 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:08.393 10:25:42 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:08.393 10:25:42 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:08.393 10:25:42 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:08.393 10:25:42 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:08.393 10:25:42 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:08.393 10:25:42 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:08.393 10:25:42 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:08.393 10:25:42 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.393 10:25:42 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:08.393 10:25:42 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:08.393 10:25:42 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:08.393 10:25:42 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:08.393 10:25:42 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:08.393 10:25:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:08.393 10:25:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 70959 00:05:08.393 10:25:42 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 70959 ']' 00:05:08.393 10:25:42 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 70959 00:05:08.393 10:25:42 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:08.393 10:25:42 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:08.393 10:25:42 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70959 00:05:08.393 10:25:42 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:08.393 10:25:42 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:08.393 killing process with pid 
70959 00:05:08.393 10:25:42 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70959' 00:05:08.393 10:25:42 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 70959 00:05:08.393 10:25:42 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 70959 00:05:08.393 ************************************ 00:05:08.393 END TEST skip_rpc 00:05:08.393 ************************************ 00:05:08.393 00:05:08.393 real 0m5.275s 00:05:08.393 user 0m4.864s 00:05:08.393 sys 0m0.308s 00:05:08.393 10:25:42 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:08.393 10:25:42 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.393 10:25:42 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:08.393 10:25:42 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:08.393 10:25:42 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:08.393 10:25:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.393 ************************************ 00:05:08.393 START TEST skip_rpc_with_json 00:05:08.393 ************************************ 00:05:08.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.393 10:25:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:08.393 10:25:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:08.393 10:25:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=71041 00:05:08.393 10:25:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:08.393 10:25:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 71041 00:05:08.393 10:25:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 71041 ']' 00:05:08.393 10:25:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.393 10:25:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:08.393 10:25:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.393 10:25:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:08.393 10:25:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:08.393 10:25:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:08.393 [2024-09-28 10:25:42.916793] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:05:08.393 [2024-09-28 10:25:42.917027] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71041 ] 00:05:08.393 [2024-09-28 10:25:43.039743] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:08.393 [2024-09-28 10:25:43.056545] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.394 [2024-09-28 10:25:43.088702] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.328 10:25:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:09.328 10:25:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:09.328 10:25:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:09.328 10:25:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.328 10:25:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:09.328 [2024-09-28 10:25:43.761636] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:09.328 request: 00:05:09.328 { 00:05:09.328 "trtype": "tcp", 00:05:09.328 "method": "nvmf_get_transports", 00:05:09.328 "req_id": 1 00:05:09.328 } 00:05:09.328 Got JSON-RPC error response 00:05:09.328 response: 00:05:09.328 { 00:05:09.328 "code": -19, 00:05:09.328 "message": "No such device" 00:05:09.328 } 00:05:09.328 10:25:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:09.328 10:25:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:09.328 10:25:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.328 10:25:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:09.328 [2024-09-28 10:25:43.769732] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:09.328 10:25:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.328 10:25:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:09.328 10:25:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.328 10:25:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:09.328 10:25:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.328 10:25:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:09.328 { 00:05:09.328 "subsystems": [ 00:05:09.328 { 00:05:09.328 "subsystem": "fsdev", 00:05:09.328 "config": [ 00:05:09.328 { 00:05:09.328 "method": "fsdev_set_opts", 00:05:09.328 "params": { 00:05:09.328 "fsdev_io_pool_size": 65535, 00:05:09.328 "fsdev_io_cache_size": 256 00:05:09.328 } 00:05:09.328 } 00:05:09.328 ] 00:05:09.328 }, 00:05:09.328 { 00:05:09.328 "subsystem": "keyring", 00:05:09.328 "config": [] 00:05:09.328 }, 00:05:09.328 { 00:05:09.328 "subsystem": "iobuf", 00:05:09.328 "config": [ 00:05:09.328 { 00:05:09.328 "method": "iobuf_set_options", 00:05:09.328 "params": { 00:05:09.328 "small_pool_count": 8192, 00:05:09.328 "large_pool_count": 1024, 00:05:09.328 "small_bufsize": 8192, 00:05:09.328 "large_bufsize": 135168 00:05:09.328 } 00:05:09.328 } 00:05:09.328 ] 00:05:09.328 }, 00:05:09.328 { 00:05:09.328 "subsystem": "sock", 00:05:09.328 "config": [ 00:05:09.328 { 00:05:09.328 "method": "sock_set_default_impl", 00:05:09.328 "params": { 00:05:09.328 "impl_name": "posix" 00:05:09.328 } 00:05:09.328 }, 00:05:09.328 { 00:05:09.328 "method": "sock_impl_set_options", 00:05:09.328 "params": { 00:05:09.328 "impl_name": "ssl", 00:05:09.328 "recv_buf_size": 4096, 00:05:09.328 "send_buf_size": 4096, 00:05:09.328 
"enable_recv_pipe": true, 00:05:09.328 "enable_quickack": false, 00:05:09.328 "enable_placement_id": 0, 00:05:09.328 "enable_zerocopy_send_server": true, 00:05:09.328 "enable_zerocopy_send_client": false, 00:05:09.328 "zerocopy_threshold": 0, 00:05:09.328 "tls_version": 0, 00:05:09.328 "enable_ktls": false 00:05:09.328 } 00:05:09.328 }, 00:05:09.328 { 00:05:09.328 "method": "sock_impl_set_options", 00:05:09.328 "params": { 00:05:09.328 "impl_name": "posix", 00:05:09.328 "recv_buf_size": 2097152, 00:05:09.328 "send_buf_size": 2097152, 00:05:09.328 "enable_recv_pipe": true, 00:05:09.328 "enable_quickack": false, 00:05:09.328 "enable_placement_id": 0, 00:05:09.328 "enable_zerocopy_send_server": true, 00:05:09.328 "enable_zerocopy_send_client": false, 00:05:09.328 "zerocopy_threshold": 0, 00:05:09.328 "tls_version": 0, 00:05:09.328 "enable_ktls": false 00:05:09.328 } 00:05:09.328 } 00:05:09.328 ] 00:05:09.328 }, 00:05:09.328 { 00:05:09.328 "subsystem": "vmd", 00:05:09.328 "config": [] 00:05:09.328 }, 00:05:09.328 { 00:05:09.328 "subsystem": "accel", 00:05:09.328 "config": [ 00:05:09.328 { 00:05:09.328 "method": "accel_set_options", 00:05:09.328 "params": { 00:05:09.328 "small_cache_size": 128, 00:05:09.328 "large_cache_size": 16, 00:05:09.328 "task_count": 2048, 00:05:09.328 "sequence_count": 2048, 00:05:09.328 "buf_count": 2048 00:05:09.328 } 00:05:09.328 } 00:05:09.328 ] 00:05:09.328 }, 00:05:09.328 { 00:05:09.328 "subsystem": "bdev", 00:05:09.328 "config": [ 00:05:09.328 { 00:05:09.328 "method": "bdev_set_options", 00:05:09.328 "params": { 00:05:09.328 "bdev_io_pool_size": 65535, 00:05:09.328 "bdev_io_cache_size": 256, 00:05:09.328 "bdev_auto_examine": true, 00:05:09.328 "iobuf_small_cache_size": 128, 00:05:09.328 "iobuf_large_cache_size": 16 00:05:09.328 } 00:05:09.328 }, 00:05:09.328 { 00:05:09.328 "method": "bdev_raid_set_options", 00:05:09.328 "params": { 00:05:09.328 "process_window_size_kb": 1024, 00:05:09.328 "process_max_bandwidth_mb_sec": 0 00:05:09.328 } 00:05:09.328 }, 00:05:09.328 { 00:05:09.328 "method": "bdev_iscsi_set_options", 00:05:09.328 "params": { 00:05:09.328 "timeout_sec": 30 00:05:09.328 } 00:05:09.328 }, 00:05:09.328 { 00:05:09.328 "method": "bdev_nvme_set_options", 00:05:09.328 "params": { 00:05:09.328 "action_on_timeout": "none", 00:05:09.328 "timeout_us": 0, 00:05:09.328 "timeout_admin_us": 0, 00:05:09.328 "keep_alive_timeout_ms": 10000, 00:05:09.328 "arbitration_burst": 0, 00:05:09.328 "low_priority_weight": 0, 00:05:09.328 "medium_priority_weight": 0, 00:05:09.328 "high_priority_weight": 0, 00:05:09.328 "nvme_adminq_poll_period_us": 10000, 00:05:09.328 "nvme_ioq_poll_period_us": 0, 00:05:09.328 "io_queue_requests": 0, 00:05:09.328 "delay_cmd_submit": true, 00:05:09.328 "transport_retry_count": 4, 00:05:09.328 "bdev_retry_count": 3, 00:05:09.328 "transport_ack_timeout": 0, 00:05:09.328 "ctrlr_loss_timeout_sec": 0, 00:05:09.328 "reconnect_delay_sec": 0, 00:05:09.328 "fast_io_fail_timeout_sec": 0, 00:05:09.328 "disable_auto_failback": false, 00:05:09.328 "generate_uuids": false, 00:05:09.328 "transport_tos": 0, 00:05:09.328 "nvme_error_stat": false, 00:05:09.328 "rdma_srq_size": 0, 00:05:09.328 "io_path_stat": false, 00:05:09.328 "allow_accel_sequence": false, 00:05:09.328 "rdma_max_cq_size": 0, 00:05:09.328 "rdma_cm_event_timeout_ms": 0, 00:05:09.328 "dhchap_digests": [ 00:05:09.328 "sha256", 00:05:09.328 "sha384", 00:05:09.328 "sha512" 00:05:09.328 ], 00:05:09.328 "dhchap_dhgroups": [ 00:05:09.328 "null", 00:05:09.328 "ffdhe2048", 00:05:09.328 "ffdhe3072", 
00:05:09.328 "ffdhe4096", 00:05:09.328 "ffdhe6144", 00:05:09.328 "ffdhe8192" 00:05:09.328 ] 00:05:09.328 } 00:05:09.328 }, 00:05:09.328 { 00:05:09.328 "method": "bdev_nvme_set_hotplug", 00:05:09.328 "params": { 00:05:09.328 "period_us": 100000, 00:05:09.328 "enable": false 00:05:09.328 } 00:05:09.328 }, 00:05:09.328 { 00:05:09.328 "method": "bdev_wait_for_examine" 00:05:09.328 } 00:05:09.328 ] 00:05:09.328 }, 00:05:09.328 { 00:05:09.328 "subsystem": "scsi", 00:05:09.328 "config": null 00:05:09.328 }, 00:05:09.328 { 00:05:09.328 "subsystem": "scheduler", 00:05:09.328 "config": [ 00:05:09.328 { 00:05:09.328 "method": "framework_set_scheduler", 00:05:09.328 "params": { 00:05:09.328 "name": "static" 00:05:09.328 } 00:05:09.329 } 00:05:09.329 ] 00:05:09.329 }, 00:05:09.329 { 00:05:09.329 "subsystem": "vhost_scsi", 00:05:09.329 "config": [] 00:05:09.329 }, 00:05:09.329 { 00:05:09.329 "subsystem": "vhost_blk", 00:05:09.329 "config": [] 00:05:09.329 }, 00:05:09.329 { 00:05:09.329 "subsystem": "ublk", 00:05:09.329 "config": [] 00:05:09.329 }, 00:05:09.329 { 00:05:09.329 "subsystem": "nbd", 00:05:09.329 "config": [] 00:05:09.329 }, 00:05:09.329 { 00:05:09.329 "subsystem": "nvmf", 00:05:09.329 "config": [ 00:05:09.329 { 00:05:09.329 "method": "nvmf_set_config", 00:05:09.329 "params": { 00:05:09.329 "discovery_filter": "match_any", 00:05:09.329 "admin_cmd_passthru": { 00:05:09.329 "identify_ctrlr": false 00:05:09.329 }, 00:05:09.329 "dhchap_digests": [ 00:05:09.329 "sha256", 00:05:09.329 "sha384", 00:05:09.329 "sha512" 00:05:09.329 ], 00:05:09.329 "dhchap_dhgroups": [ 00:05:09.329 "null", 00:05:09.329 "ffdhe2048", 00:05:09.329 "ffdhe3072", 00:05:09.329 "ffdhe4096", 00:05:09.329 "ffdhe6144", 00:05:09.329 "ffdhe8192" 00:05:09.329 ] 00:05:09.329 } 00:05:09.329 }, 00:05:09.329 { 00:05:09.329 "method": "nvmf_set_max_subsystems", 00:05:09.329 "params": { 00:05:09.329 "max_subsystems": 1024 00:05:09.329 } 00:05:09.329 }, 00:05:09.329 { 00:05:09.329 "method": "nvmf_set_crdt", 00:05:09.329 "params": { 00:05:09.329 "crdt1": 0, 00:05:09.329 "crdt2": 0, 00:05:09.329 "crdt3": 0 00:05:09.329 } 00:05:09.329 }, 00:05:09.329 { 00:05:09.329 "method": "nvmf_create_transport", 00:05:09.329 "params": { 00:05:09.329 "trtype": "TCP", 00:05:09.329 "max_queue_depth": 128, 00:05:09.329 "max_io_qpairs_per_ctrlr": 127, 00:05:09.329 "in_capsule_data_size": 4096, 00:05:09.329 "max_io_size": 131072, 00:05:09.329 "io_unit_size": 131072, 00:05:09.329 "max_aq_depth": 128, 00:05:09.329 "num_shared_buffers": 511, 00:05:09.329 "buf_cache_size": 4294967295, 00:05:09.329 "dif_insert_or_strip": false, 00:05:09.329 "zcopy": false, 00:05:09.329 "c2h_success": true, 00:05:09.329 "sock_priority": 0, 00:05:09.329 "abort_timeout_sec": 1, 00:05:09.329 "ack_timeout": 0, 00:05:09.329 "data_wr_pool_size": 0 00:05:09.329 } 00:05:09.329 } 00:05:09.329 ] 00:05:09.329 }, 00:05:09.329 { 00:05:09.329 "subsystem": "iscsi", 00:05:09.329 "config": [ 00:05:09.329 { 00:05:09.329 "method": "iscsi_set_options", 00:05:09.329 "params": { 00:05:09.329 "node_base": "iqn.2016-06.io.spdk", 00:05:09.329 "max_sessions": 128, 00:05:09.329 "max_connections_per_session": 2, 00:05:09.329 "max_queue_depth": 64, 00:05:09.329 "default_time2wait": 2, 00:05:09.329 "default_time2retain": 20, 00:05:09.329 "first_burst_length": 8192, 00:05:09.329 "immediate_data": true, 00:05:09.329 "allow_duplicated_isid": false, 00:05:09.329 "error_recovery_level": 0, 00:05:09.329 "nop_timeout": 60, 00:05:09.329 "nop_in_interval": 30, 00:05:09.329 "disable_chap": false, 00:05:09.329 
"require_chap": false, 00:05:09.329 "mutual_chap": false, 00:05:09.329 "chap_group": 0, 00:05:09.329 "max_large_datain_per_connection": 64, 00:05:09.329 "max_r2t_per_connection": 4, 00:05:09.329 "pdu_pool_size": 36864, 00:05:09.329 "immediate_data_pool_size": 16384, 00:05:09.329 "data_out_pool_size": 2048 00:05:09.329 } 00:05:09.329 } 00:05:09.329 ] 00:05:09.329 } 00:05:09.329 ] 00:05:09.329 } 00:05:09.329 10:25:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:09.329 10:25:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 71041 00:05:09.329 10:25:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 71041 ']' 00:05:09.329 10:25:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 71041 00:05:09.329 10:25:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:09.329 10:25:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:09.329 10:25:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71041 00:05:09.329 killing process with pid 71041 00:05:09.329 10:25:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:09.329 10:25:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:09.329 10:25:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71041' 00:05:09.329 10:25:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 71041 00:05:09.329 10:25:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 71041 00:05:09.589 10:25:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=71069 00:05:09.589 10:25:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:09.589 10:25:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:14.852 10:25:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 71069 00:05:14.852 10:25:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 71069 ']' 00:05:14.852 10:25:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 71069 00:05:14.852 10:25:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:14.852 10:25:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:14.852 10:25:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71069 00:05:14.852 killing process with pid 71069 00:05:14.852 10:25:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:14.852 10:25:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:14.852 10:25:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71069' 00:05:14.852 10:25:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 71069 00:05:14.852 10:25:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 71069 00:05:14.852 10:25:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:14.852 10:25:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm 
/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:14.852 ************************************ 00:05:14.852 END TEST skip_rpc_with_json 00:05:14.852 ************************************ 00:05:14.852 00:05:14.852 real 0m6.619s 00:05:14.852 user 0m6.295s 00:05:14.852 sys 0m0.540s 00:05:14.852 10:25:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:14.852 10:25:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:14.852 10:25:49 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:14.852 10:25:49 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:14.852 10:25:49 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:14.852 10:25:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.852 ************************************ 00:05:14.852 START TEST skip_rpc_with_delay 00:05:14.852 ************************************ 00:05:14.852 10:25:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:14.852 10:25:49 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:14.852 10:25:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:14.852 10:25:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:14.852 10:25:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:14.852 10:25:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:14.852 10:25:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:14.852 10:25:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:14.852 10:25:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:14.852 10:25:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:14.852 10:25:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:14.852 10:25:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:14.852 10:25:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:14.852 [2024-09-28 10:25:49.599621] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
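(Editorial sketch, not part of the captured trace: the skip_rpc_with_json pass that ends just above condenses to roughly the flow below; paths are abbreviated and the output redirections are inferred from the later cat/grep steps in the trace.)
  rpc_cmd nvmf_get_transports --trtype tcp                        # expected to fail: no transport exists yet
  rpc_cmd nvmf_create_transport -t tcp                            # create state worth persisting
  rpc_cmd save_config > test/rpc/config.json                      # dump the live configuration as JSON
  spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json     # fresh target replays the saved config
  grep -q 'TCP Transport Init' test/rpc/log.txt                   # replay must have re-created the TCP transport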
00:05:14.852 [2024-09-28 10:25:49.599756] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:15.114 10:25:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:15.114 10:25:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:15.114 10:25:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:15.114 10:25:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:15.114 00:05:15.114 real 0m0.121s 00:05:15.114 user 0m0.058s 00:05:15.114 sys 0m0.061s 00:05:15.114 ************************************ 00:05:15.114 END TEST skip_rpc_with_delay 00:05:15.114 ************************************ 00:05:15.114 10:25:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:15.114 10:25:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:15.114 10:25:49 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:15.114 10:25:49 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:15.114 10:25:49 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:15.114 10:25:49 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:15.114 10:25:49 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:15.114 10:25:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.114 ************************************ 00:05:15.114 START TEST exit_on_failed_rpc_init 00:05:15.114 ************************************ 00:05:15.114 10:25:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:15.114 10:25:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=71175 00:05:15.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.114 10:25:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 71175 00:05:15.114 10:25:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 71175 ']' 00:05:15.114 10:25:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.114 10:25:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:15.114 10:25:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.115 10:25:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:15.115 10:25:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:15.115 10:25:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:15.115 [2024-09-28 10:25:49.781093] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:05:15.115 [2024-09-28 10:25:49.781228] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71175 ] 00:05:15.376 [2024-09-28 10:25:49.914003] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:05:15.376 [2024-09-28 10:25:49.935378] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.376 [2024-09-28 10:25:49.986684] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.946 10:25:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:15.946 10:25:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:15.946 10:25:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:15.946 10:25:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:15.946 10:25:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:15.946 10:25:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:15.946 10:25:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:15.946 10:25:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:15.946 10:25:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:15.946 10:25:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:15.946 10:25:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:15.946 10:25:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:15.946 10:25:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:15.946 10:25:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:15.946 10:25:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:15.946 [2024-09-28 10:25:50.721227] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:05:15.946 [2024-09-28 10:25:50.721363] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71193 ] 00:05:16.207 [2024-09-28 10:25:50.845378] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:16.207 [2024-09-28 10:25:50.865062] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.207 [2024-09-28 10:25:50.919039] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.207 [2024-09-28 10:25:50.919409] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:16.207 [2024-09-28 10:25:50.919436] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:16.207 [2024-09-28 10:25:50.919449] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:16.469 10:25:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:16.469 10:25:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:16.469 10:25:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:16.469 10:25:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:16.469 10:25:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:16.469 10:25:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:16.469 10:25:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:16.469 10:25:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 71175 00:05:16.469 10:25:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 71175 ']' 00:05:16.469 10:25:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 71175 00:05:16.469 10:25:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:16.469 10:25:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:16.469 10:25:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71175 00:05:16.469 killing process with pid 71175 00:05:16.469 10:25:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:16.469 10:25:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:16.469 10:25:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71175' 00:05:16.469 10:25:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 71175 00:05:16.469 10:25:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 71175 00:05:16.730 00:05:16.730 real 0m1.667s 00:05:16.730 user 0m1.830s 00:05:16.730 sys 0m0.465s 00:05:16.730 10:25:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:16.730 ************************************ 00:05:16.730 END TEST exit_on_failed_rpc_init 00:05:16.730 ************************************ 00:05:16.730 10:25:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:16.730 10:25:51 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:16.730 00:05:16.730 real 0m14.045s 00:05:16.730 user 0m13.181s 00:05:16.730 sys 0m1.559s 00:05:16.730 ************************************ 00:05:16.730 END TEST skip_rpc 00:05:16.730 ************************************ 00:05:16.730 10:25:51 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:16.730 10:25:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.730 10:25:51 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:16.730 10:25:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:16.730 10:25:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:16.730 10:25:51 -- common/autotest_common.sh@10 -- # set +x 00:05:16.730 
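(Editorial sketch, not part of the captured trace: the exit_on_failed_rpc_init case that ends above boils down to two targets contending for the default RPC socket; the backgrounding and $spdk_pid variable are inferred from the waitforlisten 71175 step in the trace.)
  spdk_tgt -m 0x1 &                  # first target binds /var/tmp/spdk.sock
  waitforlisten "$spdk_pid"          # wait until the socket is accepting RPCs
  NOT spdk_tgt -m 0x2                # second target fails with "socket ... in use" and must exit non-zero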
************************************ 00:05:16.730 START TEST rpc_client 00:05:16.730 ************************************ 00:05:16.730 10:25:51 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:16.992 * Looking for test storage... 00:05:16.992 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:16.992 10:25:51 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:16.992 10:25:51 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:05:16.992 10:25:51 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:16.992 10:25:51 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:16.992 10:25:51 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:16.992 10:25:51 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:16.992 10:25:51 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:16.992 10:25:51 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:16.992 10:25:51 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:16.992 10:25:51 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:16.992 10:25:51 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:16.992 10:25:51 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:16.992 10:25:51 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:16.992 10:25:51 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:16.992 10:25:51 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:16.992 10:25:51 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:16.992 10:25:51 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:16.992 10:25:51 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:16.992 10:25:51 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:16.992 10:25:51 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:16.992 10:25:51 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:16.992 10:25:51 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:16.992 10:25:51 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:16.992 10:25:51 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:16.992 10:25:51 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:16.992 10:25:51 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:16.992 10:25:51 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:16.992 10:25:51 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:16.992 10:25:51 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:16.992 10:25:51 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:16.992 10:25:51 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:16.992 10:25:51 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:16.992 10:25:51 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:16.992 10:25:51 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:16.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.992 --rc genhtml_branch_coverage=1 00:05:16.992 --rc genhtml_function_coverage=1 00:05:16.992 --rc genhtml_legend=1 00:05:16.992 --rc geninfo_all_blocks=1 00:05:16.992 --rc geninfo_unexecuted_blocks=1 00:05:16.992 00:05:16.992 ' 00:05:16.992 10:25:51 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:16.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.992 --rc genhtml_branch_coverage=1 00:05:16.992 --rc genhtml_function_coverage=1 00:05:16.992 --rc genhtml_legend=1 00:05:16.992 --rc geninfo_all_blocks=1 00:05:16.992 --rc geninfo_unexecuted_blocks=1 00:05:16.992 00:05:16.992 ' 00:05:16.992 10:25:51 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:16.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.992 --rc genhtml_branch_coverage=1 00:05:16.992 --rc genhtml_function_coverage=1 00:05:16.992 --rc genhtml_legend=1 00:05:16.992 --rc geninfo_all_blocks=1 00:05:16.992 --rc geninfo_unexecuted_blocks=1 00:05:16.992 00:05:16.992 ' 00:05:16.992 10:25:51 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:16.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.992 --rc genhtml_branch_coverage=1 00:05:16.992 --rc genhtml_function_coverage=1 00:05:16.992 --rc genhtml_legend=1 00:05:16.992 --rc geninfo_all_blocks=1 00:05:16.992 --rc geninfo_unexecuted_blocks=1 00:05:16.992 00:05:16.992 ' 00:05:16.992 10:25:51 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:16.992 OK 00:05:16.992 10:25:51 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:16.992 00:05:16.992 real 0m0.202s 00:05:16.992 user 0m0.104s 00:05:16.992 sys 0m0.105s 00:05:16.992 10:25:51 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:16.992 ************************************ 00:05:16.992 10:25:51 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:16.992 END TEST rpc_client 00:05:16.992 ************************************ 00:05:16.992 10:25:51 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:16.992 10:25:51 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:16.992 10:25:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:16.992 10:25:51 -- common/autotest_common.sh@10 -- # set +x 00:05:16.992 ************************************ 00:05:16.992 START TEST json_config 00:05:16.992 ************************************ 00:05:16.992 10:25:51 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:17.255 10:25:51 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:17.255 10:25:51 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:05:17.255 10:25:51 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:17.255 10:25:51 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:17.255 10:25:51 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:17.255 10:25:51 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:17.255 10:25:51 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:17.255 10:25:51 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.255 10:25:51 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:17.255 10:25:51 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:17.255 10:25:51 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:17.255 10:25:51 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:17.255 10:25:51 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:17.255 10:25:51 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:17.255 10:25:51 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:17.255 10:25:51 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:17.255 10:25:51 json_config -- scripts/common.sh@345 -- # : 1 00:05:17.255 10:25:51 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:17.255 10:25:51 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:17.255 10:25:51 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:17.255 10:25:51 json_config -- scripts/common.sh@353 -- # local d=1 00:05:17.255 10:25:51 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.255 10:25:51 json_config -- scripts/common.sh@355 -- # echo 1 00:05:17.255 10:25:51 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:17.255 10:25:51 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:17.255 10:25:51 json_config -- scripts/common.sh@353 -- # local d=2 00:05:17.255 10:25:51 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.255 10:25:51 json_config -- scripts/common.sh@355 -- # echo 2 00:05:17.255 10:25:51 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:17.255 10:25:51 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:17.255 10:25:51 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:17.255 10:25:51 json_config -- scripts/common.sh@368 -- # return 0 00:05:17.255 10:25:51 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.255 10:25:51 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:17.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.255 --rc genhtml_branch_coverage=1 00:05:17.255 --rc genhtml_function_coverage=1 00:05:17.255 --rc genhtml_legend=1 00:05:17.255 --rc geninfo_all_blocks=1 00:05:17.255 --rc geninfo_unexecuted_blocks=1 00:05:17.255 00:05:17.255 ' 00:05:17.255 10:25:51 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:17.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.255 --rc genhtml_branch_coverage=1 00:05:17.255 --rc genhtml_function_coverage=1 00:05:17.255 --rc genhtml_legend=1 00:05:17.255 --rc geninfo_all_blocks=1 00:05:17.255 --rc geninfo_unexecuted_blocks=1 00:05:17.255 00:05:17.255 ' 00:05:17.255 10:25:51 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:17.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.255 --rc genhtml_branch_coverage=1 00:05:17.255 --rc genhtml_function_coverage=1 00:05:17.255 --rc genhtml_legend=1 00:05:17.255 --rc geninfo_all_blocks=1 00:05:17.255 --rc geninfo_unexecuted_blocks=1 00:05:17.255 00:05:17.255 ' 00:05:17.255 10:25:51 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:17.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.255 --rc genhtml_branch_coverage=1 00:05:17.255 --rc genhtml_function_coverage=1 00:05:17.255 --rc genhtml_legend=1 00:05:17.255 --rc geninfo_all_blocks=1 00:05:17.255 --rc geninfo_unexecuted_blocks=1 00:05:17.255 00:05:17.255 ' 00:05:17.255 10:25:51 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:17.255 10:25:51 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:17.255 10:25:51 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:17.255 10:25:51 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:17.255 10:25:51 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:17.255 10:25:51 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:17.255 10:25:51 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:17.255 10:25:51 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:17.255 10:25:51 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:17.255 10:25:51 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:17.255 10:25:51 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:17.255 10:25:51 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:17.255 10:25:51 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d3b890a9-2d28-4a32-bd03-591ea31d75ee 00:05:17.255 10:25:51 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=d3b890a9-2d28-4a32-bd03-591ea31d75ee 00:05:17.255 10:25:51 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:17.255 10:25:51 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:17.255 10:25:51 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:17.255 10:25:51 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:17.255 10:25:51 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:17.255 10:25:51 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:17.255 10:25:51 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:17.255 10:25:51 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:17.255 10:25:51 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:17.256 10:25:51 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.256 10:25:51 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.256 10:25:51 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.256 10:25:51 json_config -- paths/export.sh@5 -- # export PATH 00:05:17.256 10:25:51 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.256 10:25:51 json_config -- nvmf/common.sh@51 -- # : 0 00:05:17.256 10:25:51 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:17.256 10:25:51 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:17.256 10:25:51 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:17.256 10:25:51 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:17.256 10:25:51 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:17.256 10:25:51 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:17.256 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:17.256 10:25:51 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:17.256 10:25:51 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:17.256 10:25:51 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:17.256 10:25:51 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:17.256 10:25:51 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:17.256 10:25:51 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:17.256 10:25:51 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:17.256 10:25:51 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:17.256 10:25:51 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:17.256 WARNING: No tests are enabled so not running JSON configuration tests 00:05:17.256 10:25:51 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:17.256 00:05:17.256 real 0m0.148s 00:05:17.256 user 0m0.078s 00:05:17.256 sys 0m0.069s 00:05:17.256 10:25:51 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:17.256 10:25:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.256 ************************************ 00:05:17.256 END TEST json_config 00:05:17.256 ************************************ 00:05:17.256 10:25:51 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:17.256 10:25:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:17.256 10:25:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:17.256 10:25:51 -- common/autotest_common.sh@10 -- # set +x 00:05:17.256 ************************************ 00:05:17.256 START TEST json_config_extra_key 00:05:17.256 ************************************ 00:05:17.256 10:25:51 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:17.256 10:25:51 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:17.256 10:25:51 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:17.256 10:25:51 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:05:17.518 10:25:52 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:17.518 10:25:52 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:17.518 10:25:52 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:17.518 10:25:52 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:17.518 10:25:52 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.518 10:25:52 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:17.518 10:25:52 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:17.518 10:25:52 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:17.518 10:25:52 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:17.518 10:25:52 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:17.518 10:25:52 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:17.518 10:25:52 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:17.518 10:25:52 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:17.518 10:25:52 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:17.518 10:25:52 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:17.518 10:25:52 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:17.518 10:25:52 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:17.518 10:25:52 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:17.518 10:25:52 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.518 10:25:52 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:17.518 10:25:52 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:17.518 10:25:52 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:17.518 10:25:52 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:17.518 10:25:52 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.518 10:25:52 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:17.518 10:25:52 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:17.518 10:25:52 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:17.518 10:25:52 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:17.518 10:25:52 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:17.518 10:25:52 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.518 10:25:52 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:17.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.518 --rc genhtml_branch_coverage=1 00:05:17.518 --rc genhtml_function_coverage=1 00:05:17.518 --rc genhtml_legend=1 00:05:17.518 --rc geninfo_all_blocks=1 00:05:17.518 --rc geninfo_unexecuted_blocks=1 00:05:17.518 00:05:17.518 ' 00:05:17.518 10:25:52 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:17.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.518 --rc genhtml_branch_coverage=1 00:05:17.518 --rc genhtml_function_coverage=1 00:05:17.518 --rc genhtml_legend=1 00:05:17.518 --rc geninfo_all_blocks=1 00:05:17.518 --rc geninfo_unexecuted_blocks=1 00:05:17.518 00:05:17.518 ' 00:05:17.518 10:25:52 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:17.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.518 --rc genhtml_branch_coverage=1 00:05:17.518 --rc genhtml_function_coverage=1 00:05:17.518 --rc genhtml_legend=1 00:05:17.518 --rc geninfo_all_blocks=1 00:05:17.518 --rc geninfo_unexecuted_blocks=1 00:05:17.518 00:05:17.518 ' 00:05:17.518 10:25:52 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:17.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.518 --rc genhtml_branch_coverage=1 00:05:17.518 --rc 
genhtml_function_coverage=1 00:05:17.518 --rc genhtml_legend=1 00:05:17.518 --rc geninfo_all_blocks=1 00:05:17.518 --rc geninfo_unexecuted_blocks=1 00:05:17.518 00:05:17.518 ' 00:05:17.518 10:25:52 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:17.518 10:25:52 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:17.518 10:25:52 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:17.518 10:25:52 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:17.518 10:25:52 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:17.518 10:25:52 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:17.518 10:25:52 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:17.518 10:25:52 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:17.518 10:25:52 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:17.518 10:25:52 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:17.519 10:25:52 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:17.519 10:25:52 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:17.519 10:25:52 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d3b890a9-2d28-4a32-bd03-591ea31d75ee 00:05:17.519 10:25:52 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=d3b890a9-2d28-4a32-bd03-591ea31d75ee 00:05:17.519 10:25:52 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:17.519 10:25:52 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:17.519 10:25:52 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:17.519 10:25:52 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:17.519 10:25:52 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:17.519 10:25:52 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:17.519 10:25:52 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:17.519 10:25:52 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:17.519 10:25:52 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:17.519 10:25:52 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.519 10:25:52 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.519 10:25:52 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.519 10:25:52 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:17.519 10:25:52 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.519 10:25:52 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:17.519 10:25:52 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:17.519 10:25:52 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:17.519 10:25:52 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:17.519 10:25:52 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:17.519 10:25:52 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:17.519 10:25:52 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:17.519 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:17.519 10:25:52 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:17.519 10:25:52 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:17.519 10:25:52 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:17.519 10:25:52 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:17.519 10:25:52 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:17.519 10:25:52 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:17.519 10:25:52 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:17.519 10:25:52 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:17.519 10:25:52 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:17.519 10:25:52 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:17.519 10:25:52 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:17.519 10:25:52 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:17.519 10:25:52 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:17.519 INFO: launching applications... 00:05:17.519 10:25:52 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:05:17.519 10:25:52 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:17.519 10:25:52 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:17.519 10:25:52 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:17.519 10:25:52 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:17.519 10:25:52 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:17.519 10:25:52 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:17.519 10:25:52 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:17.519 10:25:52 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:17.519 10:25:52 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=71376 00:05:17.519 10:25:52 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:17.519 Waiting for target to run... 00:05:17.519 10:25:52 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 71376 /var/tmp/spdk_tgt.sock 00:05:17.519 10:25:52 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 71376 ']' 00:05:17.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:17.519 10:25:52 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:17.519 10:25:52 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:17.519 10:25:52 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:17.519 10:25:52 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:17.519 10:25:52 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:17.519 10:25:52 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:17.519 [2024-09-28 10:25:52.182770] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:05:17.519 [2024-09-28 10:25:52.183175] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71376 ] 00:05:18.094 [2024-09-28 10:25:52.569166] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:18.094 [2024-09-28 10:25:52.585214] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.094 [2024-09-28 10:25:52.613933] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.356 00:05:18.356 INFO: shutting down applications... 00:05:18.356 10:25:53 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:18.356 10:25:53 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:18.356 10:25:53 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:18.356 10:25:53 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:05:18.356 10:25:53 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:18.356 10:25:53 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:18.356 10:25:53 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:18.356 10:25:53 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 71376 ]] 00:05:18.356 10:25:53 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 71376 00:05:18.356 10:25:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:18.356 10:25:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:18.356 10:25:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71376 00:05:18.356 10:25:53 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:18.927 10:25:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:18.927 10:25:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:18.927 10:25:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71376 00:05:18.927 SPDK target shutdown done 00:05:18.927 Success 00:05:18.927 10:25:53 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:18.927 10:25:53 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:18.927 10:25:53 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:18.927 10:25:53 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:18.927 10:25:53 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:18.927 00:05:18.927 real 0m1.596s 00:05:18.927 user 0m1.189s 00:05:18.927 sys 0m0.484s 00:05:18.927 ************************************ 00:05:18.927 END TEST json_config_extra_key 00:05:18.927 ************************************ 00:05:18.927 10:25:53 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:18.927 10:25:53 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:18.927 10:25:53 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:18.927 10:25:53 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:18.927 10:25:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.927 10:25:53 -- common/autotest_common.sh@10 -- # set +x 00:05:18.927 ************************************ 00:05:18.927 START TEST alias_rpc 00:05:18.927 ************************************ 00:05:18.927 10:25:53 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:18.927 * Looking for test storage... 
00:05:18.927 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:18.927 10:25:53 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:18.927 10:25:53 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:18.927 10:25:53 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:19.189 10:25:53 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:19.189 10:25:53 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.189 10:25:53 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.189 10:25:53 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.189 10:25:53 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.189 10:25:53 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.189 10:25:53 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.189 10:25:53 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.189 10:25:53 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.189 10:25:53 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.189 10:25:53 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.189 10:25:53 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.189 10:25:53 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:19.189 10:25:53 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:19.189 10:25:53 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.189 10:25:53 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:19.189 10:25:53 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:19.189 10:25:53 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:19.189 10:25:53 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.189 10:25:53 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:19.189 10:25:53 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.189 10:25:53 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:19.189 10:25:53 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:19.189 10:25:53 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.189 10:25:53 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:19.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:19.189 10:25:53 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.189 10:25:53 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.189 10:25:53 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.189 10:25:53 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:19.189 10:25:53 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.189 10:25:53 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:19.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.189 --rc genhtml_branch_coverage=1 00:05:19.189 --rc genhtml_function_coverage=1 00:05:19.189 --rc genhtml_legend=1 00:05:19.189 --rc geninfo_all_blocks=1 00:05:19.189 --rc geninfo_unexecuted_blocks=1 00:05:19.189 00:05:19.189 ' 00:05:19.189 10:25:53 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:19.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.189 --rc genhtml_branch_coverage=1 00:05:19.189 --rc genhtml_function_coverage=1 00:05:19.189 --rc genhtml_legend=1 00:05:19.189 --rc geninfo_all_blocks=1 00:05:19.189 --rc geninfo_unexecuted_blocks=1 00:05:19.189 00:05:19.189 ' 00:05:19.189 10:25:53 alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:19.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.189 --rc genhtml_branch_coverage=1 00:05:19.189 --rc genhtml_function_coverage=1 00:05:19.189 --rc genhtml_legend=1 00:05:19.189 --rc geninfo_all_blocks=1 00:05:19.189 --rc geninfo_unexecuted_blocks=1 00:05:19.189 00:05:19.189 ' 00:05:19.189 10:25:53 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:19.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.189 --rc genhtml_branch_coverage=1 00:05:19.189 --rc genhtml_function_coverage=1 00:05:19.189 --rc genhtml_legend=1 00:05:19.189 --rc geninfo_all_blocks=1 00:05:19.189 --rc geninfo_unexecuted_blocks=1 00:05:19.189 00:05:19.189 ' 00:05:19.189 10:25:53 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:19.189 10:25:53 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=71449 00:05:19.189 10:25:53 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 71449 00:05:19.189 10:25:53 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:19.189 10:25:53 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 71449 ']' 00:05:19.189 10:25:53 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.189 10:25:53 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:19.189 10:25:53 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.189 10:25:53 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:19.189 10:25:53 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.189 [2024-09-28 10:25:53.816281] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
00:05:19.189 [2024-09-28 10:25:53.816612] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71449 ] 00:05:19.189 [2024-09-28 10:25:53.945825] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:19.451 [2024-09-28 10:25:53.966006] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.451 [2024-09-28 10:25:54.003003] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.023 10:25:54 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:20.023 10:25:54 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:20.023 10:25:54 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:20.285 10:25:54 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 71449 00:05:20.285 10:25:54 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 71449 ']' 00:05:20.285 10:25:54 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 71449 00:05:20.285 10:25:54 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:20.285 10:25:54 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:20.285 10:25:54 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71449 00:05:20.285 10:25:54 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:20.285 10:25:54 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:20.285 10:25:54 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71449' 00:05:20.285 killing process with pid 71449 00:05:20.285 10:25:54 alias_rpc -- common/autotest_common.sh@969 -- # kill 71449 00:05:20.285 10:25:54 alias_rpc -- common/autotest_common.sh@974 -- # wait 71449 00:05:20.546 00:05:20.546 real 0m1.641s 00:05:20.546 user 0m1.724s 00:05:20.546 sys 0m0.431s 00:05:20.546 10:25:55 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:20.546 ************************************ 00:05:20.546 END TEST alias_rpc 00:05:20.546 ************************************ 00:05:20.546 10:25:55 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.546 10:25:55 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:20.546 10:25:55 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:20.546 10:25:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:20.546 10:25:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:20.546 10:25:55 -- common/autotest_common.sh@10 -- # set +x 00:05:20.546 ************************************ 00:05:20.546 START TEST spdkcli_tcp 00:05:20.546 ************************************ 00:05:20.546 10:25:55 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:20.808 * Looking for test storage... 
00:05:20.808 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:20.808 10:25:55 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:20.808 10:25:55 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:05:20.808 10:25:55 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:20.808 10:25:55 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:20.808 10:25:55 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:20.808 10:25:55 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:20.808 10:25:55 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:20.808 10:25:55 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:20.808 10:25:55 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:20.808 10:25:55 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:20.808 10:25:55 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:20.808 10:25:55 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:20.808 10:25:55 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:20.808 10:25:55 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:20.808 10:25:55 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:20.808 10:25:55 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:20.808 10:25:55 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:20.808 10:25:55 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:20.808 10:25:55 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:20.808 10:25:55 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:20.808 10:25:55 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:20.808 10:25:55 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:20.808 10:25:55 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:20.808 10:25:55 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:20.808 10:25:55 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:20.808 10:25:55 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:20.808 10:25:55 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:20.808 10:25:55 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:20.808 10:25:55 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:20.808 10:25:55 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:20.808 10:25:55 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:20.808 10:25:55 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:20.808 10:25:55 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:20.808 10:25:55 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:20.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.808 --rc genhtml_branch_coverage=1 00:05:20.808 --rc genhtml_function_coverage=1 00:05:20.808 --rc genhtml_legend=1 00:05:20.808 --rc geninfo_all_blocks=1 00:05:20.808 --rc geninfo_unexecuted_blocks=1 00:05:20.808 00:05:20.808 ' 00:05:20.808 10:25:55 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:20.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.808 --rc genhtml_branch_coverage=1 00:05:20.808 --rc genhtml_function_coverage=1 00:05:20.808 --rc genhtml_legend=1 00:05:20.808 --rc geninfo_all_blocks=1 00:05:20.808 --rc geninfo_unexecuted_blocks=1 00:05:20.808 
00:05:20.808 ' 00:05:20.808 10:25:55 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:20.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.808 --rc genhtml_branch_coverage=1 00:05:20.808 --rc genhtml_function_coverage=1 00:05:20.808 --rc genhtml_legend=1 00:05:20.808 --rc geninfo_all_blocks=1 00:05:20.808 --rc geninfo_unexecuted_blocks=1 00:05:20.808 00:05:20.808 ' 00:05:20.808 10:25:55 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:20.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.808 --rc genhtml_branch_coverage=1 00:05:20.808 --rc genhtml_function_coverage=1 00:05:20.808 --rc genhtml_legend=1 00:05:20.808 --rc geninfo_all_blocks=1 00:05:20.808 --rc geninfo_unexecuted_blocks=1 00:05:20.808 00:05:20.808 ' 00:05:20.808 10:25:55 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:20.808 10:25:55 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:20.808 10:25:55 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:20.808 10:25:55 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:20.808 10:25:55 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:20.808 10:25:55 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:20.808 10:25:55 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:20.808 10:25:55 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:20.808 10:25:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:20.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.808 10:25:55 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=71534 00:05:20.808 10:25:55 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 71534 00:05:20.808 10:25:55 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 71534 ']' 00:05:20.808 10:25:55 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.808 10:25:55 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:20.808 10:25:55 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:20.808 10:25:55 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.808 10:25:55 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:20.808 10:25:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:20.808 [2024-09-28 10:25:55.531928] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:05:20.808 [2024-09-28 10:25:55.532101] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71534 ] 00:05:21.069 [2024-09-28 10:25:55.666156] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:21.069 [2024-09-28 10:25:55.683876] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:21.069 [2024-09-28 10:25:55.735159] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:21.069 [2024-09-28 10:25:55.735244] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.634 10:25:56 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:21.634 10:25:56 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:21.634 10:25:56 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=71546 00:05:21.634 10:25:56 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:21.634 10:25:56 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:21.893 [ 00:05:21.893 "bdev_malloc_delete", 00:05:21.893 "bdev_malloc_create", 00:05:21.893 "bdev_null_resize", 00:05:21.893 "bdev_null_delete", 00:05:21.893 "bdev_null_create", 00:05:21.893 "bdev_nvme_cuse_unregister", 00:05:21.893 "bdev_nvme_cuse_register", 00:05:21.893 "bdev_opal_new_user", 00:05:21.893 "bdev_opal_set_lock_state", 00:05:21.893 "bdev_opal_delete", 00:05:21.893 "bdev_opal_get_info", 00:05:21.893 "bdev_opal_create", 00:05:21.893 "bdev_nvme_opal_revert", 00:05:21.893 "bdev_nvme_opal_init", 00:05:21.893 "bdev_nvme_send_cmd", 00:05:21.893 "bdev_nvme_set_keys", 00:05:21.893 "bdev_nvme_get_path_iostat", 00:05:21.893 "bdev_nvme_get_mdns_discovery_info", 00:05:21.893 "bdev_nvme_stop_mdns_discovery", 00:05:21.893 "bdev_nvme_start_mdns_discovery", 00:05:21.893 "bdev_nvme_set_multipath_policy", 00:05:21.893 "bdev_nvme_set_preferred_path", 00:05:21.893 "bdev_nvme_get_io_paths", 00:05:21.893 "bdev_nvme_remove_error_injection", 00:05:21.893 "bdev_nvme_add_error_injection", 00:05:21.893 "bdev_nvme_get_discovery_info", 00:05:21.893 "bdev_nvme_stop_discovery", 00:05:21.893 "bdev_nvme_start_discovery", 00:05:21.893 "bdev_nvme_get_controller_health_info", 00:05:21.893 "bdev_nvme_disable_controller", 00:05:21.893 "bdev_nvme_enable_controller", 00:05:21.893 "bdev_nvme_reset_controller", 00:05:21.893 "bdev_nvme_get_transport_statistics", 00:05:21.893 "bdev_nvme_apply_firmware", 00:05:21.893 "bdev_nvme_detach_controller", 00:05:21.893 "bdev_nvme_get_controllers", 00:05:21.893 "bdev_nvme_attach_controller", 00:05:21.893 "bdev_nvme_set_hotplug", 00:05:21.893 "bdev_nvme_set_options", 00:05:21.893 "bdev_passthru_delete", 00:05:21.893 "bdev_passthru_create", 00:05:21.893 "bdev_lvol_set_parent_bdev", 00:05:21.893 "bdev_lvol_set_parent", 00:05:21.893 "bdev_lvol_check_shallow_copy", 00:05:21.893 "bdev_lvol_start_shallow_copy", 00:05:21.893 "bdev_lvol_grow_lvstore", 00:05:21.893 "bdev_lvol_get_lvols", 00:05:21.893 "bdev_lvol_get_lvstores", 00:05:21.893 "bdev_lvol_delete", 00:05:21.893 "bdev_lvol_set_read_only", 00:05:21.893 "bdev_lvol_resize", 00:05:21.893 "bdev_lvol_decouple_parent", 00:05:21.893 "bdev_lvol_inflate", 00:05:21.893 "bdev_lvol_rename", 00:05:21.893 "bdev_lvol_clone_bdev", 00:05:21.893 "bdev_lvol_clone", 00:05:21.893 "bdev_lvol_snapshot", 00:05:21.893 "bdev_lvol_create", 00:05:21.893 "bdev_lvol_delete_lvstore", 00:05:21.893 "bdev_lvol_rename_lvstore", 00:05:21.893 "bdev_lvol_create_lvstore", 00:05:21.893 "bdev_raid_set_options", 00:05:21.893 "bdev_raid_remove_base_bdev", 00:05:21.893 "bdev_raid_add_base_bdev", 00:05:21.893 "bdev_raid_delete", 00:05:21.893 "bdev_raid_create", 00:05:21.893 "bdev_raid_get_bdevs", 00:05:21.893 "bdev_error_inject_error", 00:05:21.893 
"bdev_error_delete", 00:05:21.893 "bdev_error_create", 00:05:21.893 "bdev_split_delete", 00:05:21.893 "bdev_split_create", 00:05:21.893 "bdev_delay_delete", 00:05:21.893 "bdev_delay_create", 00:05:21.893 "bdev_delay_update_latency", 00:05:21.893 "bdev_zone_block_delete", 00:05:21.893 "bdev_zone_block_create", 00:05:21.893 "blobfs_create", 00:05:21.893 "blobfs_detect", 00:05:21.893 "blobfs_set_cache_size", 00:05:21.893 "bdev_xnvme_delete", 00:05:21.893 "bdev_xnvme_create", 00:05:21.893 "bdev_aio_delete", 00:05:21.893 "bdev_aio_rescan", 00:05:21.893 "bdev_aio_create", 00:05:21.893 "bdev_ftl_set_property", 00:05:21.893 "bdev_ftl_get_properties", 00:05:21.893 "bdev_ftl_get_stats", 00:05:21.893 "bdev_ftl_unmap", 00:05:21.893 "bdev_ftl_unload", 00:05:21.893 "bdev_ftl_delete", 00:05:21.893 "bdev_ftl_load", 00:05:21.893 "bdev_ftl_create", 00:05:21.893 "bdev_virtio_attach_controller", 00:05:21.893 "bdev_virtio_scsi_get_devices", 00:05:21.893 "bdev_virtio_detach_controller", 00:05:21.893 "bdev_virtio_blk_set_hotplug", 00:05:21.893 "bdev_iscsi_delete", 00:05:21.893 "bdev_iscsi_create", 00:05:21.893 "bdev_iscsi_set_options", 00:05:21.893 "accel_error_inject_error", 00:05:21.893 "ioat_scan_accel_module", 00:05:21.893 "dsa_scan_accel_module", 00:05:21.893 "iaa_scan_accel_module", 00:05:21.893 "keyring_file_remove_key", 00:05:21.893 "keyring_file_add_key", 00:05:21.893 "keyring_linux_set_options", 00:05:21.893 "fsdev_aio_delete", 00:05:21.893 "fsdev_aio_create", 00:05:21.893 "iscsi_get_histogram", 00:05:21.893 "iscsi_enable_histogram", 00:05:21.893 "iscsi_set_options", 00:05:21.893 "iscsi_get_auth_groups", 00:05:21.893 "iscsi_auth_group_remove_secret", 00:05:21.893 "iscsi_auth_group_add_secret", 00:05:21.893 "iscsi_delete_auth_group", 00:05:21.893 "iscsi_create_auth_group", 00:05:21.893 "iscsi_set_discovery_auth", 00:05:21.893 "iscsi_get_options", 00:05:21.893 "iscsi_target_node_request_logout", 00:05:21.893 "iscsi_target_node_set_redirect", 00:05:21.893 "iscsi_target_node_set_auth", 00:05:21.893 "iscsi_target_node_add_lun", 00:05:21.893 "iscsi_get_stats", 00:05:21.893 "iscsi_get_connections", 00:05:21.893 "iscsi_portal_group_set_auth", 00:05:21.893 "iscsi_start_portal_group", 00:05:21.893 "iscsi_delete_portal_group", 00:05:21.894 "iscsi_create_portal_group", 00:05:21.894 "iscsi_get_portal_groups", 00:05:21.894 "iscsi_delete_target_node", 00:05:21.894 "iscsi_target_node_remove_pg_ig_maps", 00:05:21.894 "iscsi_target_node_add_pg_ig_maps", 00:05:21.894 "iscsi_create_target_node", 00:05:21.894 "iscsi_get_target_nodes", 00:05:21.894 "iscsi_delete_initiator_group", 00:05:21.894 "iscsi_initiator_group_remove_initiators", 00:05:21.894 "iscsi_initiator_group_add_initiators", 00:05:21.894 "iscsi_create_initiator_group", 00:05:21.894 "iscsi_get_initiator_groups", 00:05:21.894 "nvmf_set_crdt", 00:05:21.894 "nvmf_set_config", 00:05:21.894 "nvmf_set_max_subsystems", 00:05:21.894 "nvmf_stop_mdns_prr", 00:05:21.894 "nvmf_publish_mdns_prr", 00:05:21.894 "nvmf_subsystem_get_listeners", 00:05:21.894 "nvmf_subsystem_get_qpairs", 00:05:21.894 "nvmf_subsystem_get_controllers", 00:05:21.894 "nvmf_get_stats", 00:05:21.894 "nvmf_get_transports", 00:05:21.894 "nvmf_create_transport", 00:05:21.894 "nvmf_get_targets", 00:05:21.894 "nvmf_delete_target", 00:05:21.894 "nvmf_create_target", 00:05:21.894 "nvmf_subsystem_allow_any_host", 00:05:21.894 "nvmf_subsystem_set_keys", 00:05:21.894 "nvmf_subsystem_remove_host", 00:05:21.894 "nvmf_subsystem_add_host", 00:05:21.894 "nvmf_ns_remove_host", 00:05:21.894 "nvmf_ns_add_host", 
00:05:21.894 "nvmf_subsystem_remove_ns", 00:05:21.894 "nvmf_subsystem_set_ns_ana_group", 00:05:21.894 "nvmf_subsystem_add_ns", 00:05:21.894 "nvmf_subsystem_listener_set_ana_state", 00:05:21.894 "nvmf_discovery_get_referrals", 00:05:21.894 "nvmf_discovery_remove_referral", 00:05:21.894 "nvmf_discovery_add_referral", 00:05:21.894 "nvmf_subsystem_remove_listener", 00:05:21.894 "nvmf_subsystem_add_listener", 00:05:21.894 "nvmf_delete_subsystem", 00:05:21.894 "nvmf_create_subsystem", 00:05:21.894 "nvmf_get_subsystems", 00:05:21.894 "env_dpdk_get_mem_stats", 00:05:21.894 "nbd_get_disks", 00:05:21.894 "nbd_stop_disk", 00:05:21.894 "nbd_start_disk", 00:05:21.894 "ublk_recover_disk", 00:05:21.894 "ublk_get_disks", 00:05:21.894 "ublk_stop_disk", 00:05:21.894 "ublk_start_disk", 00:05:21.894 "ublk_destroy_target", 00:05:21.894 "ublk_create_target", 00:05:21.894 "virtio_blk_create_transport", 00:05:21.894 "virtio_blk_get_transports", 00:05:21.894 "vhost_controller_set_coalescing", 00:05:21.894 "vhost_get_controllers", 00:05:21.894 "vhost_delete_controller", 00:05:21.894 "vhost_create_blk_controller", 00:05:21.894 "vhost_scsi_controller_remove_target", 00:05:21.894 "vhost_scsi_controller_add_target", 00:05:21.894 "vhost_start_scsi_controller", 00:05:21.894 "vhost_create_scsi_controller", 00:05:21.894 "thread_set_cpumask", 00:05:21.894 "scheduler_set_options", 00:05:21.894 "framework_get_governor", 00:05:21.894 "framework_get_scheduler", 00:05:21.894 "framework_set_scheduler", 00:05:21.894 "framework_get_reactors", 00:05:21.894 "thread_get_io_channels", 00:05:21.894 "thread_get_pollers", 00:05:21.894 "thread_get_stats", 00:05:21.894 "framework_monitor_context_switch", 00:05:21.894 "spdk_kill_instance", 00:05:21.894 "log_enable_timestamps", 00:05:21.894 "log_get_flags", 00:05:21.894 "log_clear_flag", 00:05:21.894 "log_set_flag", 00:05:21.894 "log_get_level", 00:05:21.894 "log_set_level", 00:05:21.894 "log_get_print_level", 00:05:21.894 "log_set_print_level", 00:05:21.894 "framework_enable_cpumask_locks", 00:05:21.894 "framework_disable_cpumask_locks", 00:05:21.894 "framework_wait_init", 00:05:21.894 "framework_start_init", 00:05:21.894 "scsi_get_devices", 00:05:21.894 "bdev_get_histogram", 00:05:21.894 "bdev_enable_histogram", 00:05:21.894 "bdev_set_qos_limit", 00:05:21.894 "bdev_set_qd_sampling_period", 00:05:21.894 "bdev_get_bdevs", 00:05:21.894 "bdev_reset_iostat", 00:05:21.894 "bdev_get_iostat", 00:05:21.894 "bdev_examine", 00:05:21.894 "bdev_wait_for_examine", 00:05:21.894 "bdev_set_options", 00:05:21.894 "accel_get_stats", 00:05:21.894 "accel_set_options", 00:05:21.894 "accel_set_driver", 00:05:21.894 "accel_crypto_key_destroy", 00:05:21.894 "accel_crypto_keys_get", 00:05:21.894 "accel_crypto_key_create", 00:05:21.894 "accel_assign_opc", 00:05:21.894 "accel_get_module_info", 00:05:21.894 "accel_get_opc_assignments", 00:05:21.894 "vmd_rescan", 00:05:21.894 "vmd_remove_device", 00:05:21.894 "vmd_enable", 00:05:21.894 "sock_get_default_impl", 00:05:21.894 "sock_set_default_impl", 00:05:21.894 "sock_impl_set_options", 00:05:21.894 "sock_impl_get_options", 00:05:21.894 "iobuf_get_stats", 00:05:21.894 "iobuf_set_options", 00:05:21.894 "keyring_get_keys", 00:05:21.894 "framework_get_pci_devices", 00:05:21.894 "framework_get_config", 00:05:21.894 "framework_get_subsystems", 00:05:21.894 "fsdev_set_opts", 00:05:21.894 "fsdev_get_opts", 00:05:21.894 "trace_get_info", 00:05:21.894 "trace_get_tpoint_group_mask", 00:05:21.894 "trace_disable_tpoint_group", 00:05:21.894 "trace_enable_tpoint_group", 00:05:21.894 
"trace_clear_tpoint_mask", 00:05:21.894 "trace_set_tpoint_mask", 00:05:21.894 "notify_get_notifications", 00:05:21.894 "notify_get_types", 00:05:21.894 "spdk_get_version", 00:05:21.894 "rpc_get_methods" 00:05:21.894 ] 00:05:21.894 10:25:56 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:21.894 10:25:56 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:21.894 10:25:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:21.894 10:25:56 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:21.894 10:25:56 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 71534 00:05:21.894 10:25:56 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 71534 ']' 00:05:21.894 10:25:56 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 71534 00:05:21.894 10:25:56 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:21.894 10:25:56 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:21.894 10:25:56 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71534 00:05:22.152 10:25:56 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:22.152 10:25:56 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:22.152 10:25:56 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71534' 00:05:22.152 killing process with pid 71534 00:05:22.152 10:25:56 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 71534 00:05:22.152 10:25:56 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 71534 00:05:22.410 00:05:22.410 real 0m1.635s 00:05:22.410 user 0m2.838s 00:05:22.410 sys 0m0.513s 00:05:22.410 10:25:56 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:22.410 10:25:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:22.410 ************************************ 00:05:22.410 END TEST spdkcli_tcp 00:05:22.410 ************************************ 00:05:22.410 10:25:56 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:22.410 10:25:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:22.410 10:25:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:22.410 10:25:56 -- common/autotest_common.sh@10 -- # set +x 00:05:22.410 ************************************ 00:05:22.410 START TEST dpdk_mem_utility 00:05:22.410 ************************************ 00:05:22.410 10:25:56 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:22.410 * Looking for test storage... 
00:05:22.410 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:22.410 10:25:57 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:22.410 10:25:57 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:05:22.410 10:25:57 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:22.410 10:25:57 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:22.410 10:25:57 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:22.410 10:25:57 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:22.410 10:25:57 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:22.410 10:25:57 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:22.410 10:25:57 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:22.410 10:25:57 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:22.410 10:25:57 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:22.410 10:25:57 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:22.410 10:25:57 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:22.410 10:25:57 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:22.410 10:25:57 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:22.410 10:25:57 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:22.410 10:25:57 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:22.410 10:25:57 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:22.410 10:25:57 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:22.410 10:25:57 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:22.410 10:25:57 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:22.410 10:25:57 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:22.410 10:25:57 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:22.410 10:25:57 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:22.410 10:25:57 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:22.410 10:25:57 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:22.410 10:25:57 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:22.410 10:25:57 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:22.410 10:25:57 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:22.410 10:25:57 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:22.410 10:25:57 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:22.410 10:25:57 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:22.410 10:25:57 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:22.410 10:25:57 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:22.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.410 --rc genhtml_branch_coverage=1 00:05:22.410 --rc genhtml_function_coverage=1 00:05:22.410 --rc genhtml_legend=1 00:05:22.410 --rc geninfo_all_blocks=1 00:05:22.410 --rc geninfo_unexecuted_blocks=1 00:05:22.410 00:05:22.410 ' 00:05:22.410 10:25:57 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:22.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.410 --rc 
genhtml_branch_coverage=1 00:05:22.410 --rc genhtml_function_coverage=1 00:05:22.410 --rc genhtml_legend=1 00:05:22.411 --rc geninfo_all_blocks=1 00:05:22.411 --rc geninfo_unexecuted_blocks=1 00:05:22.411 00:05:22.411 ' 00:05:22.411 10:25:57 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:22.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.411 --rc genhtml_branch_coverage=1 00:05:22.411 --rc genhtml_function_coverage=1 00:05:22.411 --rc genhtml_legend=1 00:05:22.411 --rc geninfo_all_blocks=1 00:05:22.411 --rc geninfo_unexecuted_blocks=1 00:05:22.411 00:05:22.411 ' 00:05:22.411 10:25:57 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:22.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.411 --rc genhtml_branch_coverage=1 00:05:22.411 --rc genhtml_function_coverage=1 00:05:22.411 --rc genhtml_legend=1 00:05:22.411 --rc geninfo_all_blocks=1 00:05:22.411 --rc geninfo_unexecuted_blocks=1 00:05:22.411 00:05:22.411 ' 00:05:22.411 10:25:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:22.411 10:25:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=71629 00:05:22.411 10:25:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 71629 00:05:22.411 10:25:57 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 71629 ']' 00:05:22.411 10:25:57 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.411 10:25:57 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:22.411 10:25:57 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.411 10:25:57 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:22.411 10:25:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:22.411 10:25:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:22.411 [2024-09-28 10:25:57.150463] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:05:22.411 [2024-09-28 10:25:57.150579] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71629 ] 00:05:22.669 [2024-09-28 10:25:57.278398] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:22.669 [2024-09-28 10:25:57.300121] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.669 [2024-09-28 10:25:57.330756] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.235 10:25:57 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:23.235 10:25:57 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:23.235 10:25:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:23.235 10:25:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:23.235 10:25:57 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.235 10:25:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:23.235 { 00:05:23.235 "filename": "/tmp/spdk_mem_dump.txt" 00:05:23.235 } 00:05:23.235 10:25:57 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.235 10:25:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:23.548 DPDK memory size 860.000000 MiB in 1 heap(s) 00:05:23.548 1 heaps totaling size 860.000000 MiB 00:05:23.548 size: 860.000000 MiB heap id: 0 00:05:23.548 end heaps---------- 00:05:23.548 9 mempools totaling size 642.649841 MiB 00:05:23.548 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:23.548 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:23.548 size: 92.545471 MiB name: bdev_io_71629 00:05:23.548 size: 51.011292 MiB name: evtpool_71629 00:05:23.548 size: 50.003479 MiB name: msgpool_71629 00:05:23.548 size: 36.509338 MiB name: fsdev_io_71629 00:05:23.548 size: 21.763794 MiB name: PDU_Pool 00:05:23.548 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:23.548 size: 0.026123 MiB name: Session_Pool 00:05:23.548 end mempools------- 00:05:23.548 6 memzones totaling size 4.142822 MiB 00:05:23.548 size: 1.000366 MiB name: RG_ring_0_71629 00:05:23.548 size: 1.000366 MiB name: RG_ring_1_71629 00:05:23.548 size: 1.000366 MiB name: RG_ring_4_71629 00:05:23.548 size: 1.000366 MiB name: RG_ring_5_71629 00:05:23.548 size: 0.125366 MiB name: RG_ring_2_71629 00:05:23.548 size: 0.015991 MiB name: RG_ring_3_71629 00:05:23.548 end memzones------- 00:05:23.548 10:25:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:23.548 heap id: 0 total size: 860.000000 MiB number of busy elements: 303 number of free elements: 16 00:05:23.548 list of free elements. 
size: 13.937256 MiB 00:05:23.548 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:23.548 element at address: 0x200000800000 with size: 1.996948 MiB 00:05:23.548 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:05:23.548 element at address: 0x20001be00000 with size: 0.999878 MiB 00:05:23.548 element at address: 0x200034a00000 with size: 0.994446 MiB 00:05:23.548 element at address: 0x200009600000 with size: 0.959839 MiB 00:05:23.548 element at address: 0x200015e00000 with size: 0.954285 MiB 00:05:23.548 element at address: 0x20001c000000 with size: 0.936584 MiB 00:05:23.548 element at address: 0x200000200000 with size: 0.834839 MiB 00:05:23.548 element at address: 0x20001d800000 with size: 0.568237 MiB 00:05:23.548 element at address: 0x20000d800000 with size: 0.489258 MiB 00:05:23.548 element at address: 0x200003e00000 with size: 0.488647 MiB 00:05:23.548 element at address: 0x20001c200000 with size: 0.485657 MiB 00:05:23.548 element at address: 0x200007000000 with size: 0.480469 MiB 00:05:23.548 element at address: 0x20002ac00000 with size: 0.395752 MiB 00:05:23.548 element at address: 0x200003a00000 with size: 0.353027 MiB 00:05:23.548 list of standard malloc elements. size: 199.266052 MiB 00:05:23.548 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:05:23.548 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:05:23.548 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:05:23.548 element at address: 0x20001befff80 with size: 1.000122 MiB 00:05:23.548 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:05:23.548 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:23.548 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:05:23.548 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:23.548 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:05:23.548 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:05:23.549 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:05:23.549 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:05:23.549 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:05:23.549 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:05:23.549 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:05:23.549 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:05:23.549 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:05:23.549 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:05:23.549 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:05:23.549 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:05:23.549 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:05:23.549 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:05:23.549 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:23.549 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:23.549 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:23.549 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:05:23.549 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:05:23.549 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:05:23.549 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:05:23.549 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:05:23.549 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:05:23.549 element at address: 0x2000002d6d40 with size: 0.000183 MiB 
00:05:23.549 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:05:23.549 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:05:23.549 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:05:23.549 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:05:23.549 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:05:23.549 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:05:23.549 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:23.549 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:05:23.549 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:23.549 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:23.549 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:23.549 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:23.549 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:23.549 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:05:23.549 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:23.549 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:23.549 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:23.549 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:23.549 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:23.549 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:23.549 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:23.549 element at address: 0x200003a5a600 with size: 0.000183 MiB 00:05:23.549 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:05:23.549 element at address: 0x200003a5eac0 with size: 0.000183 MiB 00:05:23.549 element at address: 0x200003a7ed80 with size: 0.000183 MiB 00:05:23.549 element at address: 0x200003a7ee40 with size: 0.000183 MiB 00:05:23.549 element at address: 0x200003a7ef00 with size: 0.000183 MiB 00:05:23.549 element at address: 0x200003a7efc0 with size: 0.000183 MiB 00:05:23.549 element at address: 0x200003a7f080 with size: 0.000183 MiB 00:05:23.549 element at address: 0x200003a7f140 with size: 0.000183 MiB 00:05:23.549 element at address: 0x200003a7f200 with size: 0.000183 MiB 00:05:23.549 element at address: 0x200003a7f2c0 with size: 0.000183 MiB 00:05:23.549 element at address: 0x200003a7f380 with size: 0.000183 MiB 00:05:23.549 element at address: 0x200003a7f440 with size: 0.000183 MiB 00:05:23.549 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:05:23.549 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:05:23.549 element at address: 0x200003aff880 with size: 0.000183 MiB 00:05:23.549 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:23.549 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:23.549 element at address: 0x200003e7d180 with size: 0.000183 MiB 00:05:23.549 element at address: 0x200003e7d240 with size: 0.000183 MiB 00:05:23.549 element at address: 0x200003e7d300 with size: 0.000183 MiB 00:05:23.549 element at address: 0x200003e7d3c0 with size: 0.000183 MiB 00:05:23.549 element at address: 0x200003e7d480 with size: 0.000183 MiB 00:05:23.549 element at address: 0x200003e7d540 with size: 0.000183 MiB 00:05:23.549 element at address: 0x200003e7d600 with size: 0.000183 MiB 00:05:23.549 element at address: 0x200003e7d6c0 with size: 0.000183 MiB 00:05:23.549 element at address: 0x200003e7d780 with size: 0.000183 MiB 00:05:23.549 element at address: 0x200003e7d840 with size: 0.000183 MiB 00:05:23.549 element at 
address: 0x200003e7d900 with size: 0.000183 MiB 00:05:23.549 element at address: 0x200003e7d9c0 with size: 0.000183 MiB 00:05:23.549 element at address: 0x200003e7da80 with size: 0.000183 MiB 00:05:23.549 element at address: 0x200003e7db40 with size: 0.000183 MiB 00:05:23.549 element at address: 0x200003e7dc00 with size: 0.000183 MiB 00:05:23.549 element at address: 0x200003e7dcc0 with size: 0.000183 MiB 00:05:23.549 element at address: 0x200003e7dd80 with size: 0.000183 MiB 00:05:23.549 element at address: 0x200003e7de40 with size: 0.000183 MiB 00:05:23.549 element at address: 0x200003e7df00 with size: 0.000183 MiB 00:05:23.549 element at address: 0x200003e7dfc0 with size: 0.000183 MiB 00:05:23.549 element at address: 0x200003e7e080 with size: 0.000183 MiB 00:05:23.549 element at address: 0x200003e7e140 with size: 0.000183 MiB 00:05:23.549 element at address: 0x200003e7e200 with size: 0.000183 MiB 00:05:23.549 element at address: 0x200003e7e2c0 with size: 0.000183 MiB 00:05:23.549 element at address: 0x200003e7e380 with size: 0.000183 MiB 00:05:23.549 element at address: 0x200003e7e440 with size: 0.000183 MiB 00:05:23.549 element at address: 0x200003e7e500 with size: 0.000183 MiB 00:05:23.549 element at address: 0x200003e7e5c0 with size: 0.000183 MiB 00:05:23.549 element at address: 0x200003e7e680 with size: 0.000183 MiB 00:05:23.549 element at address: 0x200003e7e740 with size: 0.000183 MiB 00:05:23.549 element at address: 0x200003e7e800 with size: 0.000183 MiB 00:05:23.549 element at address: 0x200003e7e8c0 with size: 0.000183 MiB 00:05:23.549 element at address: 0x200003e7e980 with size: 0.000183 MiB 00:05:23.549 element at address: 0x200003e7ea40 with size: 0.000183 MiB 00:05:23.549 element at address: 0x200003e7eb00 with size: 0.000183 MiB 00:05:23.549 element at address: 0x200003e7ebc0 with size: 0.000183 MiB 00:05:23.549 element at address: 0x200003e7ec80 with size: 0.000183 MiB 00:05:23.549 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:05:23.549 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:05:23.549 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:23.549 element at address: 0x20000707b000 with size: 0.000183 MiB 00:05:23.549 element at address: 0x20000707b0c0 with size: 0.000183 MiB 00:05:23.549 element at address: 0x20000707b180 with size: 0.000183 MiB 00:05:23.549 element at address: 0x20000707b240 with size: 0.000183 MiB 00:05:23.549 element at address: 0x20000707b300 with size: 0.000183 MiB 00:05:23.549 element at address: 0x20000707b3c0 with size: 0.000183 MiB 00:05:23.549 element at address: 0x20000707b480 with size: 0.000183 MiB 00:05:23.549 element at address: 0x20000707b540 with size: 0.000183 MiB 00:05:23.549 element at address: 0x20000707b600 with size: 0.000183 MiB 00:05:23.549 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:05:23.549 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:05:23.549 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:05:23.549 element at address: 0x20000d87d400 with size: 0.000183 MiB 00:05:23.549 element at address: 0x20000d87d4c0 with size: 0.000183 MiB 00:05:23.549 element at address: 0x20000d87d580 with size: 0.000183 MiB 00:05:23.549 element at address: 0x20000d87d640 with size: 0.000183 MiB 00:05:23.549 element at address: 0x20000d87d700 with size: 0.000183 MiB 00:05:23.549 element at address: 0x20000d87d7c0 with size: 0.000183 MiB 00:05:23.549 element at address: 0x20000d87d880 with size: 0.000183 MiB 00:05:23.549 element at address: 0x20000d87d940 
with size: 0.000183 MiB 00:05:23.549 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:05:23.549 element at address: 0x20000d87dac0 with size: 0.000183 MiB 00:05:23.549 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:05:23.549 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:05:23.549 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:05:23.549 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:05:23.549 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:05:23.549 element at address: 0x20001d891780 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d891840 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d891900 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d8919c0 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d891a80 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d891b40 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d891c00 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d891cc0 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d891d80 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d891e40 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d891f00 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d891fc0 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d892080 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d892140 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d892200 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d8922c0 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d892380 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d892440 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d892500 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d8925c0 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d892680 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d892740 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d892800 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d8928c0 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d892980 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d892a40 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d892b00 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d892bc0 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d892c80 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d892d40 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d892e00 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d892ec0 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d892f80 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d893040 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d893100 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d8931c0 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d893280 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d893340 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d893400 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d8934c0 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d893580 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d893640 with size: 0.000183 MiB 
00:05:23.550 element at address: 0x20001d893700 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d8937c0 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d893880 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d893940 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d893a00 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d893ac0 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d893b80 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d893c40 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d893d00 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d893dc0 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d893e80 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d893f40 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d894000 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d8940c0 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d894180 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d894240 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d894300 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d8943c0 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d894480 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d894540 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d894600 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d8946c0 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d894780 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d894840 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d894900 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d8949c0 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d894a80 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d894b40 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d894c00 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d894cc0 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d894d80 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d894e40 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d894f00 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d894fc0 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d895080 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d895140 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d895200 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d8952c0 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d895380 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20001d895440 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20002ac65500 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20002ac655c0 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20002ac6c1c0 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20002ac6c3c0 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20002ac6c480 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20002ac6c540 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20002ac6c600 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20002ac6c6c0 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20002ac6c780 with size: 0.000183 MiB 00:05:23.550 element at 
address: 0x20002ac6c840 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20002ac6c900 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20002ac6c9c0 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20002ac6ca80 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20002ac6cb40 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20002ac6cc00 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20002ac6ccc0 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20002ac6cd80 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20002ac6ce40 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20002ac6cf00 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20002ac6cfc0 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20002ac6d080 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20002ac6d140 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20002ac6d200 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20002ac6d2c0 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20002ac6d380 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20002ac6d440 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20002ac6d500 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20002ac6d5c0 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20002ac6d680 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20002ac6d740 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20002ac6d800 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20002ac6d8c0 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20002ac6d980 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20002ac6da40 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20002ac6db00 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20002ac6dbc0 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20002ac6dc80 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20002ac6dd40 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20002ac6de00 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20002ac6dec0 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20002ac6df80 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20002ac6e040 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20002ac6e100 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20002ac6e1c0 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20002ac6e280 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20002ac6e340 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20002ac6e400 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20002ac6e4c0 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20002ac6e580 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20002ac6e640 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20002ac6e700 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20002ac6e7c0 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20002ac6e880 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20002ac6e940 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20002ac6ea00 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20002ac6eac0 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20002ac6eb80 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20002ac6ec40 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20002ac6ed00 
with size: 0.000183 MiB 00:05:23.550 element at address: 0x20002ac6edc0 with size: 0.000183 MiB 00:05:23.550 element at address: 0x20002ac6ee80 with size: 0.000183 MiB 00:05:23.551 element at address: 0x20002ac6ef40 with size: 0.000183 MiB 00:05:23.551 element at address: 0x20002ac6f000 with size: 0.000183 MiB 00:05:23.551 element at address: 0x20002ac6f0c0 with size: 0.000183 MiB 00:05:23.551 element at address: 0x20002ac6f180 with size: 0.000183 MiB 00:05:23.551 element at address: 0x20002ac6f240 with size: 0.000183 MiB 00:05:23.551 element at address: 0x20002ac6f300 with size: 0.000183 MiB 00:05:23.551 element at address: 0x20002ac6f3c0 with size: 0.000183 MiB 00:05:23.551 element at address: 0x20002ac6f480 with size: 0.000183 MiB 00:05:23.551 element at address: 0x20002ac6f540 with size: 0.000183 MiB 00:05:23.551 element at address: 0x20002ac6f600 with size: 0.000183 MiB 00:05:23.551 element at address: 0x20002ac6f6c0 with size: 0.000183 MiB 00:05:23.551 element at address: 0x20002ac6f780 with size: 0.000183 MiB 00:05:23.551 element at address: 0x20002ac6f840 with size: 0.000183 MiB 00:05:23.551 element at address: 0x20002ac6f900 with size: 0.000183 MiB 00:05:23.551 element at address: 0x20002ac6f9c0 with size: 0.000183 MiB 00:05:23.551 element at address: 0x20002ac6fa80 with size: 0.000183 MiB 00:05:23.551 element at address: 0x20002ac6fb40 with size: 0.000183 MiB 00:05:23.551 element at address: 0x20002ac6fc00 with size: 0.000183 MiB 00:05:23.551 element at address: 0x20002ac6fcc0 with size: 0.000183 MiB 00:05:23.551 element at address: 0x20002ac6fd80 with size: 0.000183 MiB 00:05:23.551 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:05:23.551 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:05:23.551 list of memzone associated elements. 
size: 646.796692 MiB 00:05:23.551 element at address: 0x20001d895500 with size: 211.416748 MiB 00:05:23.551 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:23.551 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:05:23.551 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:23.551 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:05:23.551 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_71629_0 00:05:23.551 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:23.551 associated memzone info: size: 48.002930 MiB name: MP_evtpool_71629_0 00:05:23.551 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:23.551 associated memzone info: size: 48.002930 MiB name: MP_msgpool_71629_0 00:05:23.551 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:05:23.551 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_71629_0 00:05:23.551 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:05:23.551 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:23.551 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:05:23.551 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:23.551 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:23.551 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_71629 00:05:23.551 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:23.551 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_71629 00:05:23.551 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:23.551 associated memzone info: size: 1.007996 MiB name: MP_evtpool_71629 00:05:23.551 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:05:23.551 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:23.551 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:05:23.551 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:23.551 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:05:23.551 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:23.551 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:05:23.551 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:23.551 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:23.551 associated memzone info: size: 1.000366 MiB name: RG_ring_0_71629 00:05:23.551 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:23.551 associated memzone info: size: 1.000366 MiB name: RG_ring_1_71629 00:05:23.551 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:05:23.551 associated memzone info: size: 1.000366 MiB name: RG_ring_4_71629 00:05:23.551 element at address: 0x200034afe940 with size: 1.000488 MiB 00:05:23.551 associated memzone info: size: 1.000366 MiB name: RG_ring_5_71629 00:05:23.551 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:05:23.551 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_71629 00:05:23.551 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:05:23.551 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_71629 00:05:23.551 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:05:23.551 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:23.551 element at address: 0x20000707b780 with size: 0.500488 MiB 00:05:23.551 associated memzone info: size: 0.500366 
MiB name: RG_MP_SCSI_TASK_Pool 00:05:23.551 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:05:23.551 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:23.551 element at address: 0x200003a5eb80 with size: 0.125488 MiB 00:05:23.551 associated memzone info: size: 0.125366 MiB name: RG_ring_2_71629 00:05:23.551 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:05:23.551 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:23.551 element at address: 0x20002ac65680 with size: 0.023743 MiB 00:05:23.551 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:23.551 element at address: 0x200003a5a8c0 with size: 0.016113 MiB 00:05:23.551 associated memzone info: size: 0.015991 MiB name: RG_ring_3_71629 00:05:23.551 element at address: 0x20002ac6b7c0 with size: 0.002441 MiB 00:05:23.551 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:23.551 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:05:23.551 associated memzone info: size: 0.000183 MiB name: MP_msgpool_71629 00:05:23.551 element at address: 0x200003aff940 with size: 0.000305 MiB 00:05:23.551 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_71629 00:05:23.551 element at address: 0x200003a5a6c0 with size: 0.000305 MiB 00:05:23.551 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_71629 00:05:23.551 element at address: 0x20002ac6c280 with size: 0.000305 MiB 00:05:23.551 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:23.551 10:25:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:23.551 10:25:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 71629 00:05:23.551 10:25:58 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 71629 ']' 00:05:23.551 10:25:58 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 71629 00:05:23.551 10:25:58 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:23.551 10:25:58 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:23.551 10:25:58 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71629 00:05:23.551 10:25:58 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:23.551 10:25:58 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:23.551 killing process with pid 71629 00:05:23.551 10:25:58 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71629' 00:05:23.551 10:25:58 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 71629 00:05:23.551 10:25:58 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 71629 00:05:23.813 00:05:23.813 real 0m1.398s 00:05:23.813 user 0m1.452s 00:05:23.813 sys 0m0.346s 00:05:23.813 10:25:58 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:23.813 10:25:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:23.813 ************************************ 00:05:23.813 END TEST dpdk_mem_utility 00:05:23.813 ************************************ 00:05:23.813 10:25:58 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:23.813 10:25:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:23.813 10:25:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:23.813 10:25:58 -- common/autotest_common.sh@10 -- # set +x 
00:05:23.813 ************************************ 00:05:23.813 START TEST event 00:05:23.813 ************************************ 00:05:23.813 10:25:58 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:23.813 * Looking for test storage... 00:05:23.813 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:23.813 10:25:58 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:23.813 10:25:58 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:23.813 10:25:58 event -- common/autotest_common.sh@1681 -- # lcov --version 00:05:23.813 10:25:58 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:23.813 10:25:58 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.813 10:25:58 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.813 10:25:58 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.813 10:25:58 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.813 10:25:58 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.813 10:25:58 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.813 10:25:58 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.813 10:25:58 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.813 10:25:58 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.813 10:25:58 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.813 10:25:58 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.813 10:25:58 event -- scripts/common.sh@344 -- # case "$op" in 00:05:23.813 10:25:58 event -- scripts/common.sh@345 -- # : 1 00:05:23.813 10:25:58 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.813 10:25:58 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:23.813 10:25:58 event -- scripts/common.sh@365 -- # decimal 1 00:05:23.813 10:25:58 event -- scripts/common.sh@353 -- # local d=1 00:05:23.813 10:25:58 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.813 10:25:58 event -- scripts/common.sh@355 -- # echo 1 00:05:23.813 10:25:58 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.813 10:25:58 event -- scripts/common.sh@366 -- # decimal 2 00:05:23.813 10:25:58 event -- scripts/common.sh@353 -- # local d=2 00:05:23.813 10:25:58 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.813 10:25:58 event -- scripts/common.sh@355 -- # echo 2 00:05:23.813 10:25:58 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.813 10:25:58 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.813 10:25:58 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.813 10:25:58 event -- scripts/common.sh@368 -- # return 0 00:05:23.813 10:25:58 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.813 10:25:58 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:23.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.813 --rc genhtml_branch_coverage=1 00:05:23.813 --rc genhtml_function_coverage=1 00:05:23.813 --rc genhtml_legend=1 00:05:23.813 --rc geninfo_all_blocks=1 00:05:23.813 --rc geninfo_unexecuted_blocks=1 00:05:23.813 00:05:23.813 ' 00:05:23.813 10:25:58 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:23.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.813 --rc genhtml_branch_coverage=1 00:05:23.813 --rc genhtml_function_coverage=1 00:05:23.813 --rc genhtml_legend=1 00:05:23.813 --rc 
geninfo_all_blocks=1 00:05:23.813 --rc geninfo_unexecuted_blocks=1 00:05:23.813 00:05:23.813 ' 00:05:23.813 10:25:58 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:23.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.813 --rc genhtml_branch_coverage=1 00:05:23.813 --rc genhtml_function_coverage=1 00:05:23.813 --rc genhtml_legend=1 00:05:23.813 --rc geninfo_all_blocks=1 00:05:23.813 --rc geninfo_unexecuted_blocks=1 00:05:23.813 00:05:23.813 ' 00:05:23.813 10:25:58 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:23.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.813 --rc genhtml_branch_coverage=1 00:05:23.813 --rc genhtml_function_coverage=1 00:05:23.813 --rc genhtml_legend=1 00:05:23.813 --rc geninfo_all_blocks=1 00:05:23.813 --rc geninfo_unexecuted_blocks=1 00:05:23.813 00:05:23.813 ' 00:05:23.813 10:25:58 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:23.813 10:25:58 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:23.813 10:25:58 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:23.813 10:25:58 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:23.813 10:25:58 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:23.813 10:25:58 event -- common/autotest_common.sh@10 -- # set +x 00:05:23.813 ************************************ 00:05:23.814 START TEST event_perf 00:05:23.814 ************************************ 00:05:23.814 10:25:58 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:23.814 Running I/O for 1 seconds...[2024-09-28 10:25:58.573665] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:05:23.814 [2024-09-28 10:25:58.573781] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71709 ] 00:05:24.072 [2024-09-28 10:25:58.701899] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:24.072 [2024-09-28 10:25:58.722375] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:24.072 [2024-09-28 10:25:58.756732] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.072 Running I/O for 1 seconds...[2024-09-28 10:25:58.757451] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:24.072 [2024-09-28 10:25:58.757464] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.072 [2024-09-28 10:25:58.757551] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:25.449 00:05:25.449 lcore 0: 196837 00:05:25.449 lcore 1: 196833 00:05:25.449 lcore 2: 196834 00:05:25.449 lcore 3: 196836 00:05:25.449 done. 
00:05:25.449 00:05:25.449 real 0m1.269s 00:05:25.449 user 0m4.062s 00:05:25.449 sys 0m0.090s 00:05:25.449 10:25:59 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:25.449 10:25:59 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:25.449 ************************************ 00:05:25.449 END TEST event_perf 00:05:25.449 ************************************ 00:05:25.449 10:25:59 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:25.449 10:25:59 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:25.449 10:25:59 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:25.449 10:25:59 event -- common/autotest_common.sh@10 -- # set +x 00:05:25.449 ************************************ 00:05:25.449 START TEST event_reactor 00:05:25.449 ************************************ 00:05:25.449 10:25:59 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:25.449 [2024-09-28 10:25:59.887171] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:05:25.449 [2024-09-28 10:25:59.887255] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71743 ] 00:05:25.449 [2024-09-28 10:26:00.008431] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:25.449 [2024-09-28 10:26:00.026954] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.449 [2024-09-28 10:26:00.058258] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.385 test_start 00:05:26.385 oneshot 00:05:26.385 tick 100 00:05:26.385 tick 100 00:05:26.385 tick 250 00:05:26.385 tick 100 00:05:26.385 tick 100 00:05:26.385 tick 250 00:05:26.385 tick 100 00:05:26.385 tick 500 00:05:26.385 tick 100 00:05:26.385 tick 100 00:05:26.385 tick 250 00:05:26.385 tick 100 00:05:26.385 tick 100 00:05:26.385 test_end 00:05:26.385 00:05:26.385 real 0m1.252s 00:05:26.385 user 0m1.082s 00:05:26.385 sys 0m0.062s 00:05:26.385 10:26:01 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:26.385 10:26:01 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:26.385 ************************************ 00:05:26.385 END TEST event_reactor 00:05:26.385 ************************************ 00:05:26.385 10:26:01 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:26.385 10:26:01 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:26.385 10:26:01 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:26.385 10:26:01 event -- common/autotest_common.sh@10 -- # set +x 00:05:26.385 ************************************ 00:05:26.385 START TEST event_reactor_perf 00:05:26.385 ************************************ 00:05:26.385 10:26:01 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:26.643 [2024-09-28 10:26:01.183469] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
00:05:26.643 [2024-09-28 10:26:01.183580] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71780 ] 00:05:26.643 [2024-09-28 10:26:01.310205] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:26.643 [2024-09-28 10:26:01.331953] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.643 [2024-09-28 10:26:01.362763] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.018 test_start 00:05:28.018 test_end 00:05:28.018 Performance: 315860 events per second 00:05:28.018 00:05:28.018 real 0m1.262s 00:05:28.018 user 0m1.084s 00:05:28.018 sys 0m0.071s 00:05:28.018 10:26:02 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:28.018 10:26:02 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:28.018 ************************************ 00:05:28.018 END TEST event_reactor_perf 00:05:28.018 ************************************ 00:05:28.018 10:26:02 event -- event/event.sh@49 -- # uname -s 00:05:28.018 10:26:02 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:28.018 10:26:02 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:28.018 10:26:02 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:28.018 10:26:02 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:28.018 10:26:02 event -- common/autotest_common.sh@10 -- # set +x 00:05:28.018 ************************************ 00:05:28.018 START TEST event_scheduler 00:05:28.018 ************************************ 00:05:28.018 10:26:02 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:28.018 * Looking for test storage... 
00:05:28.018 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:28.018 10:26:02 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:28.018 10:26:02 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:05:28.018 10:26:02 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:28.018 10:26:02 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:28.018 10:26:02 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:28.018 10:26:02 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:28.018 10:26:02 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:28.018 10:26:02 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.018 10:26:02 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:28.018 10:26:02 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:28.018 10:26:02 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:28.018 10:26:02 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:28.018 10:26:02 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:28.018 10:26:02 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:28.018 10:26:02 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:28.018 10:26:02 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:28.018 10:26:02 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:28.018 10:26:02 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:28.018 10:26:02 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:28.018 10:26:02 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:28.018 10:26:02 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:28.018 10:26:02 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.018 10:26:02 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:28.018 10:26:02 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:28.018 10:26:02 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:28.018 10:26:02 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:28.018 10:26:02 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.018 10:26:02 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:28.018 10:26:02 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:28.018 10:26:02 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:28.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:28.018 10:26:02 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:28.018 10:26:02 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:28.018 10:26:02 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.018 10:26:02 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:28.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.018 --rc genhtml_branch_coverage=1 00:05:28.018 --rc genhtml_function_coverage=1 00:05:28.018 --rc genhtml_legend=1 00:05:28.018 --rc geninfo_all_blocks=1 00:05:28.018 --rc geninfo_unexecuted_blocks=1 00:05:28.018 00:05:28.018 ' 00:05:28.018 10:26:02 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:28.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.018 --rc genhtml_branch_coverage=1 00:05:28.018 --rc genhtml_function_coverage=1 00:05:28.018 --rc genhtml_legend=1 00:05:28.018 --rc geninfo_all_blocks=1 00:05:28.018 --rc geninfo_unexecuted_blocks=1 00:05:28.018 00:05:28.018 ' 00:05:28.018 10:26:02 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:28.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.018 --rc genhtml_branch_coverage=1 00:05:28.018 --rc genhtml_function_coverage=1 00:05:28.018 --rc genhtml_legend=1 00:05:28.018 --rc geninfo_all_blocks=1 00:05:28.018 --rc geninfo_unexecuted_blocks=1 00:05:28.018 00:05:28.018 ' 00:05:28.018 10:26:02 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:28.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.018 --rc genhtml_branch_coverage=1 00:05:28.018 --rc genhtml_function_coverage=1 00:05:28.018 --rc genhtml_legend=1 00:05:28.018 --rc geninfo_all_blocks=1 00:05:28.018 --rc geninfo_unexecuted_blocks=1 00:05:28.018 00:05:28.018 ' 00:05:28.018 10:26:02 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:28.018 10:26:02 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=71846 00:05:28.018 10:26:02 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:28.018 10:26:02 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 71846 00:05:28.018 10:26:02 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 71846 ']' 00:05:28.018 10:26:02 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.018 10:26:02 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:28.018 10:26:02 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.018 10:26:02 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:28.018 10:26:02 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:28.018 10:26:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:28.018 [2024-09-28 10:26:02.655411] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
00:05:28.018 [2024-09-28 10:26:02.655531] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71846 ] 00:05:28.018 [2024-09-28 10:26:02.783640] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:28.277 [2024-09-28 10:26:02.803164] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:28.277 [2024-09-28 10:26:02.836687] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.277 [2024-09-28 10:26:02.837057] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.277 [2024-09-28 10:26:02.837118] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:28.277 [2024-09-28 10:26:02.837171] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:28.844 10:26:03 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:28.844 10:26:03 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:28.844 10:26:03 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:28.844 10:26:03 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.844 10:26:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:28.844 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:28.844 POWER: Cannot set governor of lcore 0 to userspace 00:05:28.844 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:28.844 POWER: Cannot set governor of lcore 0 to performance 00:05:28.844 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:28.844 POWER: Cannot set governor of lcore 0 to userspace 00:05:28.844 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:28.844 POWER: Cannot set governor of lcore 0 to userspace 00:05:28.844 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:28.844 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:28.844 POWER: Unable to set Power Management Environment for lcore 0 00:05:28.844 [2024-09-28 10:26:03.506470] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:28.844 [2024-09-28 10:26:03.506483] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:28.844 [2024-09-28 10:26:03.506493] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:28.844 [2024-09-28 10:26:03.506505] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:28.844 [2024-09-28 10:26:03.506514] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:28.844 [2024-09-28 10:26:03.506521] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:28.844 10:26:03 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:28.844 10:26:03 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:28.844 10:26:03 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.844 10:26:03 event.event_scheduler -- common/autotest_common.sh@10 
-- # set +x 00:05:28.844 [2024-09-28 10:26:03.560178] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:28.844 10:26:03 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:28.844 10:26:03 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:28.844 10:26:03 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:28.844 10:26:03 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:28.844 10:26:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:28.844 ************************************ 00:05:28.844 START TEST scheduler_create_thread 00:05:28.844 ************************************ 00:05:28.844 10:26:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:28.844 10:26:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:28.844 10:26:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.844 10:26:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.844 2 00:05:28.844 10:26:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:28.844 10:26:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:28.844 10:26:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.844 10:26:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.844 3 00:05:28.844 10:26:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:28.844 10:26:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:28.844 10:26:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.844 10:26:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.844 4 00:05:28.844 10:26:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:28.844 10:26:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:28.844 10:26:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.844 10:26:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.844 5 00:05:28.844 10:26:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:28.844 10:26:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:28.844 10:26:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.844 10:26:03 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:05:28.844 6 00:05:28.844 10:26:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:28.844 10:26:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:28.844 10:26:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.844 10:26:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:29.103 7 00:05:29.103 10:26:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:29.103 10:26:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:29.103 10:26:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:29.103 10:26:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:29.103 8 00:05:29.103 10:26:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:29.103 10:26:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:29.103 10:26:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:29.103 10:26:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:29.103 9 00:05:29.103 10:26:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:29.103 10:26:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:29.103 10:26:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:29.103 10:26:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:29.103 10 00:05:29.103 10:26:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:29.103 10:26:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:29.103 10:26:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:29.103 10:26:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:29.103 10:26:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:29.103 10:26:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:29.103 10:26:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:29.103 10:26:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:29.103 10:26:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:29.670 10:26:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:05:29.670 10:26:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:29.670 10:26:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:29.670 10:26:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.044 10:26:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.044 10:26:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:31.044 10:26:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:31.044 10:26:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.044 10:26:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.977 ************************************ 00:05:31.977 END TEST scheduler_create_thread 00:05:31.977 ************************************ 00:05:31.977 10:26:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.977 00:05:31.977 real 0m3.093s 00:05:31.977 user 0m0.012s 00:05:31.977 sys 0m0.006s 00:05:31.977 10:26:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:31.977 10:26:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.977 10:26:06 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:31.977 10:26:06 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 71846 00:05:31.977 10:26:06 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 71846 ']' 00:05:31.977 10:26:06 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 71846 00:05:31.977 10:26:06 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:31.977 10:26:06 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:31.977 10:26:06 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71846 00:05:31.977 killing process with pid 71846 00:05:31.977 10:26:06 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:31.977 10:26:06 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:31.977 10:26:06 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71846' 00:05:31.977 10:26:06 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 71846 00:05:31.977 10:26:06 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 71846 00:05:32.547 [2024-09-28 10:26:07.039560] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
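For reference, the scheduler_create_thread sequence traced above reduces to the following RPC calls. This is a minimal sketch only: it assumes the scheduler test app from test/event/scheduler is already running with --wait-for-rpc on /var/tmp/spdk.sock, that scheduler_plugin.py is importable by rpc.py, and that scheduler_thread_create prints the new thread id on stdout (all assumptions, not part of the captured run):
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk.sock
  # Select the dynamic scheduler before finishing subsystem init, as scheduler.sh does.
  $rpc -s $sock framework_set_scheduler dynamic
  $rpc -s $sock framework_start_init
  # One busy (-a 100) and one idle (-a 0) thread pinned to each core of the 0xF mask.
  for mask in 0x1 0x2 0x4 0x8; do
      $rpc -s $sock --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m $mask -a 100
      $rpc -s $sock --plugin scheduler_plugin scheduler_thread_create -n idle_pinned   -m $mask -a 0
  done
  # Unpinned threads: one at 30% activity, one raised from 0% to 50% after creation.
  $rpc -s $sock --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
  tid=$($rpc -s $sock --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
  $rpc -s $sock --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
  # A throwaway thread is created and removed to exercise scheduler_thread_delete.
  tid=$($rpc -s $sock --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
  $rpc -s $sock --plugin scheduler_plugin scheduler_thread_delete "$tid"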
00:05:32.547 00:05:32.547 real 0m4.756s 00:05:32.547 user 0m9.014s 00:05:32.547 sys 0m0.323s 00:05:32.547 10:26:07 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:32.547 10:26:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:32.547 ************************************ 00:05:32.547 END TEST event_scheduler 00:05:32.547 ************************************ 00:05:32.547 10:26:07 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:32.547 10:26:07 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:32.547 10:26:07 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:32.547 10:26:07 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.547 10:26:07 event -- common/autotest_common.sh@10 -- # set +x 00:05:32.547 ************************************ 00:05:32.547 START TEST app_repeat 00:05:32.547 ************************************ 00:05:32.547 10:26:07 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:32.547 10:26:07 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.547 10:26:07 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.547 10:26:07 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:32.547 10:26:07 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:32.547 10:26:07 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:32.547 10:26:07 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:32.547 10:26:07 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:32.547 10:26:07 event.app_repeat -- event/event.sh@19 -- # repeat_pid=71951 00:05:32.547 10:26:07 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:32.547 Process app_repeat pid: 71951 00:05:32.547 10:26:07 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 71951' 00:05:32.547 spdk_app_start Round 0 00:05:32.547 10:26:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:32.547 10:26:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:32.547 10:26:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71951 /var/tmp/spdk-nbd.sock 00:05:32.547 10:26:07 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:32.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:32.547 10:26:07 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71951 ']' 00:05:32.547 10:26:07 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:32.547 10:26:07 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:32.547 10:26:07 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:32.547 10:26:07 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:32.547 10:26:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:32.806 [2024-09-28 10:26:07.328280] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
00:05:32.806 [2024-09-28 10:26:07.328382] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71951 ] 00:05:32.806 [2024-09-28 10:26:07.458037] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:32.806 [2024-09-28 10:26:07.478326] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:32.806 [2024-09-28 10:26:07.531851] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.806 [2024-09-28 10:26:07.531890] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.764 10:26:08 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:33.764 10:26:08 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:33.764 10:26:08 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:34.022 Malloc0 00:05:34.022 10:26:08 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:34.022 Malloc1 00:05:34.022 10:26:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:34.022 10:26:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.022 10:26:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:34.022 10:26:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:34.022 10:26:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.022 10:26:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:34.022 10:26:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:34.022 10:26:08 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.022 10:26:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:34.022 10:26:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:34.022 10:26:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.022 10:26:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:34.022 10:26:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:34.022 10:26:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:34.022 10:26:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:34.022 10:26:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:34.282 /dev/nbd0 00:05:34.282 10:26:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:34.282 10:26:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:34.282 10:26:08 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:34.282 10:26:08 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:34.282 10:26:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:34.282 10:26:08 event.app_repeat -- 
common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:34.282 10:26:08 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:34.282 10:26:08 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:34.282 10:26:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:34.282 10:26:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:34.282 10:26:08 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:34.282 1+0 records in 00:05:34.282 1+0 records out 00:05:34.282 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261152 s, 15.7 MB/s 00:05:34.282 10:26:08 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:34.282 10:26:08 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:34.282 10:26:08 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:34.282 10:26:08 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:34.282 10:26:08 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:34.282 10:26:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:34.282 10:26:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:34.282 10:26:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:34.542 /dev/nbd1 00:05:34.542 10:26:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:34.542 10:26:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:34.542 10:26:09 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:34.542 10:26:09 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:34.542 10:26:09 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:34.542 10:26:09 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:34.542 10:26:09 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:34.542 10:26:09 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:34.542 10:26:09 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:34.542 10:26:09 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:34.542 10:26:09 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:34.542 1+0 records in 00:05:34.542 1+0 records out 00:05:34.542 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000321804 s, 12.7 MB/s 00:05:34.542 10:26:09 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:34.542 10:26:09 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:34.542 10:26:09 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:34.542 10:26:09 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:34.542 10:26:09 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:34.542 10:26:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:34.542 10:26:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:34.542 10:26:09 
event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:34.542 10:26:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.542 10:26:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:34.801 10:26:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:34.801 { 00:05:34.801 "nbd_device": "/dev/nbd0", 00:05:34.801 "bdev_name": "Malloc0" 00:05:34.801 }, 00:05:34.801 { 00:05:34.801 "nbd_device": "/dev/nbd1", 00:05:34.801 "bdev_name": "Malloc1" 00:05:34.801 } 00:05:34.801 ]' 00:05:34.802 10:26:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:34.802 { 00:05:34.802 "nbd_device": "/dev/nbd0", 00:05:34.802 "bdev_name": "Malloc0" 00:05:34.802 }, 00:05:34.802 { 00:05:34.802 "nbd_device": "/dev/nbd1", 00:05:34.802 "bdev_name": "Malloc1" 00:05:34.802 } 00:05:34.802 ]' 00:05:34.802 10:26:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:34.802 10:26:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:34.802 /dev/nbd1' 00:05:34.802 10:26:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:34.802 /dev/nbd1' 00:05:34.802 10:26:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:34.802 10:26:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:34.802 10:26:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:34.802 10:26:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:34.802 10:26:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:34.802 10:26:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:34.802 10:26:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.802 10:26:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:34.802 10:26:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:34.802 10:26:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:34.802 10:26:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:34.802 10:26:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:34.802 256+0 records in 00:05:34.802 256+0 records out 00:05:34.802 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00773494 s, 136 MB/s 00:05:34.802 10:26:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:34.802 10:26:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:34.802 256+0 records in 00:05:34.802 256+0 records out 00:05:34.802 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0199033 s, 52.7 MB/s 00:05:34.802 10:26:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:34.802 10:26:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:34.802 256+0 records in 00:05:34.802 256+0 records out 00:05:34.802 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0185688 s, 56.5 MB/s 00:05:34.802 10:26:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:34.802 
10:26:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.802 10:26:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:34.802 10:26:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:34.802 10:26:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:34.802 10:26:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:34.802 10:26:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:34.802 10:26:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:34.802 10:26:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:34.802 10:26:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:34.802 10:26:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:34.802 10:26:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:34.802 10:26:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:34.802 10:26:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.802 10:26:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.802 10:26:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:34.802 10:26:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:34.802 10:26:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:34.802 10:26:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:35.060 10:26:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:35.060 10:26:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:35.060 10:26:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:35.060 10:26:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:35.060 10:26:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:35.060 10:26:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:35.060 10:26:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:35.060 10:26:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:35.060 10:26:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:35.060 10:26:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:35.318 10:26:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:35.318 10:26:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:35.318 10:26:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:35.318 10:26:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:35.318 10:26:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:35.318 10:26:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:35.318 10:26:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:35.318 10:26:09 event.app_repeat -- 
bdev/nbd_common.sh@45 -- # return 0 00:05:35.318 10:26:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:35.318 10:26:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.318 10:26:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:35.577 10:26:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:35.577 10:26:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:35.577 10:26:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:35.577 10:26:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:35.577 10:26:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:35.577 10:26:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:35.577 10:26:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:35.577 10:26:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:35.577 10:26:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:35.577 10:26:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:35.577 10:26:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:35.577 10:26:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:35.577 10:26:10 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:35.834 10:26:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:35.834 [2024-09-28 10:26:10.465943] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:35.834 [2024-09-28 10:26:10.495953] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.834 [2024-09-28 10:26:10.495954] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.834 [2024-09-28 10:26:10.525591] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:35.834 [2024-09-28 10:26:10.525640] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:39.112 10:26:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:39.112 spdk_app_start Round 1 00:05:39.112 10:26:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:39.112 10:26:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71951 /var/tmp/spdk-nbd.sock 00:05:39.112 10:26:13 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71951 ']' 00:05:39.112 10:26:13 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:39.112 10:26:13 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:39.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:39.112 10:26:13 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
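Each app_repeat round repeats the malloc-bdev/NBD data check just traced for Round 0. Stripped of the nbd_common.sh helpers, one round is roughly the following; a sketch only, assuming the app_repeat app is serving RPCs on /var/tmp/spdk-nbd.sock and the nbd kernel module is loaded:
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
  # Create two 64 MB malloc bdevs (4096-byte blocks) and expose them over NBD.
  $rpc -s $sock bdev_malloc_create 64 4096          # -> Malloc0
  $rpc -s $sock bdev_malloc_create 64 4096          # -> Malloc1
  $rpc -s $sock nbd_start_disk Malloc0 /dev/nbd0
  $rpc -s $sock nbd_start_disk Malloc1 /dev/nbd1
  # Write 1 MiB of random data through each NBD device, then read it back and compare.
  dd if=/dev/urandom of=$tmp bs=4096 count=256
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if=$tmp of=$nbd bs=4096 count=256 oflag=direct
      cmp -b -n 1M $tmp $nbd
  done
  rm -f $tmp
  # Detach the devices and confirm nbd_get_disks reports none left.
  $rpc -s $sock nbd_stop_disk /dev/nbd0
  $rpc -s $sock nbd_stop_disk /dev/nbd1
  $rpc -s $sock nbd_get_disks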
00:05:39.112 10:26:13 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:39.112 10:26:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:39.112 10:26:13 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:39.112 10:26:13 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:39.112 10:26:13 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:39.112 Malloc0 00:05:39.112 10:26:13 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:39.370 Malloc1 00:05:39.370 10:26:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:39.370 10:26:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.370 10:26:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:39.370 10:26:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:39.370 10:26:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.370 10:26:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:39.370 10:26:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:39.370 10:26:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.370 10:26:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:39.370 10:26:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:39.370 10:26:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.370 10:26:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:39.370 10:26:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:39.370 10:26:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:39.370 10:26:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.370 10:26:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:39.627 /dev/nbd0 00:05:39.627 10:26:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:39.627 10:26:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:39.627 10:26:14 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:39.627 10:26:14 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:39.627 10:26:14 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:39.627 10:26:14 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:39.627 10:26:14 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:39.627 10:26:14 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:39.627 10:26:14 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:39.627 10:26:14 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:39.627 10:26:14 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:39.627 1+0 records in 00:05:39.627 1+0 records out 
00:05:39.627 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000250185 s, 16.4 MB/s 00:05:39.627 10:26:14 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:39.627 10:26:14 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:39.627 10:26:14 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:39.627 10:26:14 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:39.627 10:26:14 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:39.627 10:26:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:39.627 10:26:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.627 10:26:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:39.885 /dev/nbd1 00:05:39.885 10:26:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:39.885 10:26:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:39.885 10:26:14 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:39.885 10:26:14 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:39.885 10:26:14 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:39.885 10:26:14 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:39.885 10:26:14 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:39.885 10:26:14 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:39.885 10:26:14 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:39.885 10:26:14 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:39.885 10:26:14 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:39.885 1+0 records in 00:05:39.885 1+0 records out 00:05:39.885 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231413 s, 17.7 MB/s 00:05:39.885 10:26:14 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:39.885 10:26:14 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:39.885 10:26:14 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:39.885 10:26:14 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:39.885 10:26:14 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:39.885 10:26:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:39.885 10:26:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.885 10:26:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:39.885 10:26:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.885 10:26:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:40.144 10:26:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:40.144 { 00:05:40.144 "nbd_device": "/dev/nbd0", 00:05:40.144 "bdev_name": "Malloc0" 00:05:40.144 }, 00:05:40.144 { 00:05:40.144 "nbd_device": "/dev/nbd1", 00:05:40.144 "bdev_name": "Malloc1" 00:05:40.144 } 
00:05:40.144 ]' 00:05:40.144 10:26:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:40.144 { 00:05:40.144 "nbd_device": "/dev/nbd0", 00:05:40.144 "bdev_name": "Malloc0" 00:05:40.144 }, 00:05:40.144 { 00:05:40.144 "nbd_device": "/dev/nbd1", 00:05:40.144 "bdev_name": "Malloc1" 00:05:40.144 } 00:05:40.144 ]' 00:05:40.144 10:26:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:40.144 10:26:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:40.144 /dev/nbd1' 00:05:40.144 10:26:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:40.144 10:26:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:40.144 /dev/nbd1' 00:05:40.144 10:26:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:40.144 10:26:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:40.144 10:26:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:40.144 10:26:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:40.144 10:26:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:40.144 10:26:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.144 10:26:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:40.144 10:26:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:40.144 10:26:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:40.144 10:26:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:40.144 10:26:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:40.144 256+0 records in 00:05:40.144 256+0 records out 00:05:40.144 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120266 s, 87.2 MB/s 00:05:40.144 10:26:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:40.144 10:26:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:40.144 256+0 records in 00:05:40.144 256+0 records out 00:05:40.144 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0153144 s, 68.5 MB/s 00:05:40.144 10:26:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:40.144 10:26:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:40.144 256+0 records in 00:05:40.144 256+0 records out 00:05:40.144 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0186122 s, 56.3 MB/s 00:05:40.144 10:26:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:40.144 10:26:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.144 10:26:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:40.144 10:26:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:40.144 10:26:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:40.144 10:26:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:40.144 10:26:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:40.144 10:26:14 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:40.144 10:26:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:40.144 10:26:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:40.144 10:26:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:40.144 10:26:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:40.144 10:26:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:40.144 10:26:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.144 10:26:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.144 10:26:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:40.144 10:26:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:40.144 10:26:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:40.144 10:26:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:40.402 10:26:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:40.402 10:26:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:40.402 10:26:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:40.403 10:26:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:40.403 10:26:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:40.403 10:26:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:40.403 10:26:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:40.403 10:26:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:40.403 10:26:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:40.403 10:26:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:40.661 10:26:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:40.661 10:26:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:40.661 10:26:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:40.661 10:26:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:40.661 10:26:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:40.661 10:26:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:40.661 10:26:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:40.661 10:26:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:40.661 10:26:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:40.661 10:26:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.661 10:26:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:40.661 10:26:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:40.661 10:26:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:40.661 10:26:15 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:05:40.661 10:26:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:40.661 10:26:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:40.661 10:26:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:40.661 10:26:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:40.661 10:26:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:40.661 10:26:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:40.661 10:26:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:40.661 10:26:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:40.661 10:26:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:40.661 10:26:15 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:40.919 10:26:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:41.186 [2024-09-28 10:26:15.724237] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:41.186 [2024-09-28 10:26:15.751142] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.186 [2024-09-28 10:26:15.751237] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.186 [2024-09-28 10:26:15.781156] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:41.186 [2024-09-28 10:26:15.781209] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:44.481 spdk_app_start Round 2 00:05:44.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:44.481 10:26:18 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:44.481 10:26:18 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:44.481 10:26:18 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71951 /var/tmp/spdk-nbd.sock 00:05:44.481 10:26:18 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71951 ']' 00:05:44.481 10:26:18 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:44.481 10:26:18 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:44.481 10:26:18 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:44.481 10:26:18 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:44.481 10:26:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:44.481 10:26:18 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:44.481 10:26:18 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:44.481 10:26:18 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:44.481 Malloc0 00:05:44.481 10:26:19 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:44.481 Malloc1 00:05:44.739 10:26:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:44.739 10:26:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.739 10:26:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:44.739 10:26:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:44.739 10:26:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.739 10:26:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:44.739 10:26:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:44.739 10:26:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.739 10:26:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:44.739 10:26:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:44.739 10:26:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.739 10:26:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:44.739 10:26:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:44.739 10:26:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:44.739 10:26:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.739 10:26:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:44.739 /dev/nbd0 00:05:44.739 10:26:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:44.739 10:26:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:44.739 10:26:19 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:44.739 10:26:19 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:44.739 10:26:19 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:44.739 10:26:19 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:44.739 10:26:19 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:44.739 10:26:19 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:44.739 10:26:19 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:44.739 10:26:19 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:44.739 10:26:19 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:44.739 1+0 records in 00:05:44.739 1+0 records out 
00:05:44.739 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00036605 s, 11.2 MB/s 00:05:44.739 10:26:19 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:44.739 10:26:19 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:44.739 10:26:19 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:44.740 10:26:19 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:44.740 10:26:19 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:44.740 10:26:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:44.740 10:26:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.740 10:26:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:45.000 /dev/nbd1 00:05:45.000 10:26:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:45.000 10:26:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:45.000 10:26:19 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:45.000 10:26:19 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:45.000 10:26:19 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:45.000 10:26:19 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:45.000 10:26:19 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:45.000 10:26:19 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:45.000 10:26:19 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:45.000 10:26:19 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:45.000 10:26:19 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:45.000 1+0 records in 00:05:45.000 1+0 records out 00:05:45.000 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266508 s, 15.4 MB/s 00:05:45.000 10:26:19 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:45.000 10:26:19 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:45.000 10:26:19 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:45.000 10:26:19 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:45.000 10:26:19 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:45.000 10:26:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:45.000 10:26:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.000 10:26:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:45.000 10:26:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.000 10:26:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:45.259 10:26:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:45.259 { 00:05:45.259 "nbd_device": "/dev/nbd0", 00:05:45.259 "bdev_name": "Malloc0" 00:05:45.259 }, 00:05:45.259 { 00:05:45.259 "nbd_device": "/dev/nbd1", 00:05:45.259 "bdev_name": "Malloc1" 00:05:45.259 } 
00:05:45.259 ]' 00:05:45.259 10:26:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:45.259 { 00:05:45.259 "nbd_device": "/dev/nbd0", 00:05:45.259 "bdev_name": "Malloc0" 00:05:45.259 }, 00:05:45.259 { 00:05:45.259 "nbd_device": "/dev/nbd1", 00:05:45.259 "bdev_name": "Malloc1" 00:05:45.259 } 00:05:45.259 ]' 00:05:45.259 10:26:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:45.259 10:26:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:45.259 /dev/nbd1' 00:05:45.259 10:26:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:45.259 10:26:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:45.259 /dev/nbd1' 00:05:45.259 10:26:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:45.259 10:26:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:45.259 10:26:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:45.259 10:26:19 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:45.259 10:26:19 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:45.259 10:26:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.259 10:26:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:45.259 10:26:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:45.260 10:26:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:45.260 10:26:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:45.260 10:26:19 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:45.260 256+0 records in 00:05:45.260 256+0 records out 00:05:45.260 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0129349 s, 81.1 MB/s 00:05:45.260 10:26:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:45.260 10:26:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:45.260 256+0 records in 00:05:45.260 256+0 records out 00:05:45.260 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.014461 s, 72.5 MB/s 00:05:45.260 10:26:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:45.260 10:26:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:45.260 256+0 records in 00:05:45.260 256+0 records out 00:05:45.260 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0166108 s, 63.1 MB/s 00:05:45.260 10:26:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:45.260 10:26:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.260 10:26:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:45.260 10:26:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:45.260 10:26:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:45.260 10:26:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:45.260 10:26:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:45.260 10:26:20 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:45.260 10:26:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:45.518 10:26:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:45.518 10:26:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:45.518 10:26:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:45.518 10:26:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:45.518 10:26:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.518 10:26:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.518 10:26:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:45.518 10:26:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:45.518 10:26:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:45.518 10:26:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:45.518 10:26:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:45.518 10:26:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:45.518 10:26:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:45.518 10:26:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:45.518 10:26:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:45.518 10:26:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:45.518 10:26:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:45.518 10:26:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:45.518 10:26:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:45.518 10:26:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:45.776 10:26:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:45.776 10:26:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:45.776 10:26:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:45.776 10:26:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:45.776 10:26:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:45.776 10:26:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:45.776 10:26:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:45.776 10:26:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:45.776 10:26:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:45.776 10:26:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.776 10:26:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:46.034 10:26:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:46.034 10:26:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:46.035 10:26:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:05:46.035 10:26:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:46.035 10:26:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:46.035 10:26:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:46.035 10:26:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:46.035 10:26:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:46.035 10:26:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:46.035 10:26:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:46.035 10:26:20 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:46.035 10:26:20 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:46.035 10:26:20 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:46.293 10:26:20 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:46.293 [2024-09-28 10:26:20.994105] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:46.293 [2024-09-28 10:26:21.021496] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.293 [2024-09-28 10:26:21.021604] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.293 [2024-09-28 10:26:21.052111] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:46.293 [2024-09-28 10:26:21.052328] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:49.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:49.576 10:26:23 event.app_repeat -- event/event.sh@38 -- # waitforlisten 71951 /var/tmp/spdk-nbd.sock 00:05:49.576 10:26:23 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71951 ']' 00:05:49.576 10:26:23 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:49.576 10:26:23 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:49.576 10:26:23 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
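Round 2 repeats the same data-path check exercised in the earlier round: fill a 1 MiB scratch file with random data, copy it onto each NBD device, and compare the device contents back against the file before stopping the disks. A hedged reconstruction of that write/verify loop, with paths and sizes taken from the trace rather than from the nbd_common.sh source, looks like this:

    tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    nbd_list='/dev/nbd0 /dev/nbd1'

    # Write phase: 256 x 4 KiB = 1 MiB of random data, pushed to every NBD device.
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    for dev in $nbd_list; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
    done

    # Verify phase: the first 1 MiB of each device must match the scratch file.
    for dev in $nbd_list; do
        cmp -b -n 1M "$tmp" "$dev"
    done
    rm "$tmp"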
00:05:49.576 10:26:23 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:49.576 10:26:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:49.576 10:26:24 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:49.576 10:26:24 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:49.576 10:26:24 event.app_repeat -- event/event.sh@39 -- # killprocess 71951 00:05:49.576 10:26:24 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 71951 ']' 00:05:49.576 10:26:24 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 71951 00:05:49.576 10:26:24 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:49.576 10:26:24 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:49.576 10:26:24 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71951 00:05:49.576 killing process with pid 71951 00:05:49.576 10:26:24 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:49.576 10:26:24 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:49.576 10:26:24 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71951' 00:05:49.576 10:26:24 event.app_repeat -- common/autotest_common.sh@969 -- # kill 71951 00:05:49.576 10:26:24 event.app_repeat -- common/autotest_common.sh@974 -- # wait 71951 00:05:49.576 spdk_app_start is called in Round 0. 00:05:49.576 Shutdown signal received, stop current app iteration 00:05:49.576 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 reinitialization... 00:05:49.576 spdk_app_start is called in Round 1. 00:05:49.576 Shutdown signal received, stop current app iteration 00:05:49.576 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 reinitialization... 00:05:49.576 spdk_app_start is called in Round 2. 00:05:49.576 Shutdown signal received, stop current app iteration 00:05:49.576 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 reinitialization... 00:05:49.576 spdk_app_start is called in Round 3. 00:05:49.576 Shutdown signal received, stop current app iteration 00:05:49.576 ************************************ 00:05:49.576 END TEST app_repeat 00:05:49.576 ************************************ 00:05:49.576 10:26:24 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:49.576 10:26:24 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:49.576 00:05:49.576 real 0m16.975s 00:05:49.576 user 0m37.919s 00:05:49.576 sys 0m2.044s 00:05:49.576 10:26:24 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:49.576 10:26:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:49.576 10:26:24 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:49.576 10:26:24 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:49.576 10:26:24 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:49.576 10:26:24 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.576 10:26:24 event -- common/autotest_common.sh@10 -- # set +x 00:05:49.576 ************************************ 00:05:49.576 START TEST cpu_locks 00:05:49.576 ************************************ 00:05:49.576 10:26:24 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:49.834 * Looking for test storage... 
00:05:49.834 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:49.834 10:26:24 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:49.834 10:26:24 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:49.834 10:26:24 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:05:49.834 10:26:24 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:49.834 10:26:24 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:49.834 10:26:24 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:49.835 10:26:24 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:49.835 10:26:24 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.835 10:26:24 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:49.835 10:26:24 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:49.835 10:26:24 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:49.835 10:26:24 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:49.835 10:26:24 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:49.835 10:26:24 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:49.835 10:26:24 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:49.835 10:26:24 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:49.835 10:26:24 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:49.835 10:26:24 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:49.835 10:26:24 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:49.835 10:26:24 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:49.835 10:26:24 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:49.835 10:26:24 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.835 10:26:24 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:49.835 10:26:24 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:49.835 10:26:24 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:49.835 10:26:24 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:49.835 10:26:24 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.835 10:26:24 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:49.835 10:26:24 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:49.835 10:26:24 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:49.835 10:26:24 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:49.835 10:26:24 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:49.835 10:26:24 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.835 10:26:24 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:49.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.835 --rc genhtml_branch_coverage=1 00:05:49.835 --rc genhtml_function_coverage=1 00:05:49.835 --rc genhtml_legend=1 00:05:49.835 --rc geninfo_all_blocks=1 00:05:49.835 --rc geninfo_unexecuted_blocks=1 00:05:49.835 00:05:49.835 ' 00:05:49.835 10:26:24 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:49.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.835 --rc genhtml_branch_coverage=1 00:05:49.835 --rc genhtml_function_coverage=1 
00:05:49.835 --rc genhtml_legend=1 00:05:49.835 --rc geninfo_all_blocks=1 00:05:49.835 --rc geninfo_unexecuted_blocks=1 00:05:49.835 00:05:49.835 ' 00:05:49.835 10:26:24 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:49.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.835 --rc genhtml_branch_coverage=1 00:05:49.835 --rc genhtml_function_coverage=1 00:05:49.835 --rc genhtml_legend=1 00:05:49.835 --rc geninfo_all_blocks=1 00:05:49.835 --rc geninfo_unexecuted_blocks=1 00:05:49.835 00:05:49.835 ' 00:05:49.835 10:26:24 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:49.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.835 --rc genhtml_branch_coverage=1 00:05:49.835 --rc genhtml_function_coverage=1 00:05:49.835 --rc genhtml_legend=1 00:05:49.835 --rc geninfo_all_blocks=1 00:05:49.835 --rc geninfo_unexecuted_blocks=1 00:05:49.835 00:05:49.835 ' 00:05:49.835 10:26:24 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:49.835 10:26:24 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:49.835 10:26:24 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:49.835 10:26:24 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:49.835 10:26:24 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:49.835 10:26:24 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.835 10:26:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:49.835 ************************************ 00:05:49.835 START TEST default_locks 00:05:49.835 ************************************ 00:05:49.835 10:26:24 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:49.835 10:26:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=72370 00:05:49.835 10:26:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 72370 00:05:49.835 10:26:24 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 72370 ']' 00:05:49.835 10:26:24 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.835 10:26:24 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:49.835 10:26:24 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.835 10:26:24 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:49.835 10:26:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:49.835 10:26:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:49.835 [2024-09-28 10:26:24.518383] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
00:05:49.835 [2024-09-28 10:26:24.518500] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72370 ] 00:05:50.094 [2024-09-28 10:26:24.646087] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:50.094 [2024-09-28 10:26:24.662875] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.094 [2024-09-28 10:26:24.692937] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.660 10:26:25 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:50.660 10:26:25 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:50.660 10:26:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 72370 00:05:50.660 10:26:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 72370 00:05:50.660 10:26:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:50.919 10:26:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 72370 00:05:50.919 10:26:25 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 72370 ']' 00:05:50.919 10:26:25 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 72370 00:05:50.919 10:26:25 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:50.919 10:26:25 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:50.919 10:26:25 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72370 00:05:50.919 10:26:25 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:50.919 10:26:25 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:50.919 killing process with pid 72370 00:05:50.919 10:26:25 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72370' 00:05:50.919 10:26:25 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 72370 00:05:50.919 10:26:25 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 72370 00:05:51.178 10:26:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 72370 00:05:51.178 10:26:25 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:51.178 10:26:25 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 72370 00:05:51.178 10:26:25 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:51.178 10:26:25 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:51.178 10:26:25 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:51.178 10:26:25 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:51.178 10:26:25 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 72370 00:05:51.178 10:26:25 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 72370 ']' 00:05:51.178 10:26:25 event.cpu_locks.default_locks -- common/autotest_common.sh@835 
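The default_locks run that starts here hinges on one check: while spdk_tgt holds core 0, lslocks must report an spdk_cpu_lock entry for its pid, and once the process is killed that entry must be gone. A minimal stand-in for that check, assuming the same single-core mask as the trace (waitforlisten is approximated with a plain sleep), might look like:

    # Hypothetical helper mirroring the locks_exist check seen in cpu_locks.sh.
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    pid=$!
    sleep 1    # rough stand-in for waitforlisten on /var/tmp/spdk.sock

    locks_exist "$pid" && echo "core lock held by pid $pid"
    kill "$pid" && wait "$pid"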
-- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.178 10:26:25 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:51.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.178 10:26:25 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.178 10:26:25 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:51.178 10:26:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.178 ERROR: process (pid: 72370) is no longer running 00:05:51.178 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (72370) - No such process 00:05:51.178 10:26:25 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:51.178 10:26:25 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:05:51.178 10:26:25 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:51.178 10:26:25 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:51.178 10:26:25 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:51.178 10:26:25 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:51.178 10:26:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:51.178 10:26:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:51.178 10:26:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:51.178 10:26:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:51.178 00:05:51.178 real 0m1.398s 00:05:51.178 user 0m1.431s 00:05:51.178 sys 0m0.420s 00:05:51.178 10:26:25 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:51.178 ************************************ 00:05:51.178 END TEST default_locks 00:05:51.178 10:26:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.178 ************************************ 00:05:51.178 10:26:25 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:51.178 10:26:25 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:51.178 10:26:25 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:51.178 10:26:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.178 ************************************ 00:05:51.178 START TEST default_locks_via_rpc 00:05:51.178 ************************************ 00:05:51.178 10:26:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:05:51.178 10:26:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=72418 00:05:51.178 10:26:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 72418 00:05:51.178 10:26:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 72418 ']' 00:05:51.178 10:26:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.178 10:26:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:51.178 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:05:51.178 10:26:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.178 10:26:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:51.178 10:26:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.178 10:26:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:51.178 [2024-09-28 10:26:25.953503] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:05:51.178 [2024-09-28 10:26:25.953623] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72418 ] 00:05:51.436 [2024-09-28 10:26:26.081238] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:51.436 [2024-09-28 10:26:26.090915] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.436 [2024-09-28 10:26:26.120337] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.002 10:26:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:52.002 10:26:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:52.002 10:26:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:52.002 10:26:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:52.002 10:26:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.002 10:26:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:52.002 10:26:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:52.002 10:26:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:52.002 10:26:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:52.002 10:26:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:52.002 10:26:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:52.002 10:26:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:52.002 10:26:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.002 10:26:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:52.002 10:26:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 72418 00:05:52.002 10:26:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 72418 00:05:52.002 10:26:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:52.261 10:26:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 72418 00:05:52.261 10:26:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 72418 ']' 00:05:52.261 10:26:26 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 72418 00:05:52.261 10:26:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:05:52.261 10:26:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:52.261 10:26:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72418 00:05:52.261 10:26:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:52.261 10:26:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:52.261 killing process with pid 72418 00:05:52.261 10:26:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72418' 00:05:52.261 10:26:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 72418 00:05:52.261 10:26:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 72418 00:05:52.519 00:05:52.519 real 0m1.326s 00:05:52.519 user 0m1.357s 00:05:52.519 sys 0m0.377s 00:05:52.519 ************************************ 00:05:52.519 END TEST default_locks_via_rpc 00:05:52.519 ************************************ 00:05:52.519 10:26:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:52.519 10:26:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.519 10:26:27 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:52.519 10:26:27 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:52.519 10:26:27 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:52.519 10:26:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.519 ************************************ 00:05:52.519 START TEST non_locking_app_on_locked_coremask 00:05:52.519 ************************************ 00:05:52.519 10:26:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:05:52.519 10:26:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=72459 00:05:52.519 10:26:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 72459 /var/tmp/spdk.sock 00:05:52.519 10:26:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 72459 ']' 00:05:52.519 10:26:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.519 10:26:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:52.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.519 10:26:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:52.519 10:26:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:52.519 10:26:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:52.519 10:26:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.777 [2024-09-28 10:26:27.320341] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:05:52.777 [2024-09-28 10:26:27.320457] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72459 ] 00:05:52.777 [2024-09-28 10:26:27.448794] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:52.777 [2024-09-28 10:26:27.464879] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.777 [2024-09-28 10:26:27.493801] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.342 10:26:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:53.342 10:26:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:53.342 10:26:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=72475 00:05:53.342 10:26:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 72475 /var/tmp/spdk2.sock 00:05:53.342 10:26:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 72475 ']' 00:05:53.342 10:26:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:53.342 10:26:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:53.342 10:26:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:53.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:53.342 10:26:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:53.342 10:26:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:53.342 10:26:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.600 [2024-09-28 10:26:28.177059] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:05:53.600 [2024-09-28 10:26:28.177176] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72475 ] 00:05:53.600 [2024-09-28 10:26:28.306368] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
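What the non_locking_app_on_locked_coremask test is driving at in this stretch of the trace is that a second target may share core 0 with an already-running one only if it opts out of the core-lock check. A short sketch of the two launches, using the same flags the log shows (socket names as in the trace, startup waits omitted):

    bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    # First instance takes core 0 and creates the core lock (default behaviour).
    "$bin" -m 0x1 &

    # Second instance reuses core 0 but skips the lock check, so both can coexist;
    # without --disable-cpumask-locks it would refuse to start.
    "$bin" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &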
00:05:53.600 [2024-09-28 10:26:28.323716] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:53.600 [2024-09-28 10:26:28.323747] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.858 [2024-09-28 10:26:28.380860] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.426 10:26:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:54.426 10:26:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:54.426 10:26:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 72459 00:05:54.426 10:26:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72459 00:05:54.426 10:26:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:54.685 10:26:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 72459 00:05:54.685 10:26:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 72459 ']' 00:05:54.685 10:26:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 72459 00:05:54.685 10:26:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:54.685 10:26:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:54.685 10:26:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72459 00:05:54.685 killing process with pid 72459 00:05:54.685 10:26:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:54.685 10:26:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:54.685 10:26:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72459' 00:05:54.685 10:26:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 72459 00:05:54.685 10:26:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 72459 00:05:55.252 10:26:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 72475 00:05:55.252 10:26:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 72475 ']' 00:05:55.252 10:26:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 72475 00:05:55.252 10:26:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:55.252 10:26:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:55.252 10:26:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72475 00:05:55.252 killing process with pid 72475 00:05:55.252 10:26:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:55.252 10:26:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:55.252 10:26:29 event.cpu_locks.non_locking_app_on_locked_coremask 
-- common/autotest_common.sh@968 -- # echo 'killing process with pid 72475' 00:05:55.252 10:26:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 72475 00:05:55.253 10:26:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 72475 00:05:55.512 00:05:55.512 real 0m2.843s 00:05:55.512 user 0m3.132s 00:05:55.512 sys 0m0.774s 00:05:55.512 10:26:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:55.512 10:26:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.512 ************************************ 00:05:55.512 END TEST non_locking_app_on_locked_coremask 00:05:55.512 ************************************ 00:05:55.512 10:26:30 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:55.512 10:26:30 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:55.512 10:26:30 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:55.512 10:26:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:55.512 ************************************ 00:05:55.512 START TEST locking_app_on_unlocked_coremask 00:05:55.512 ************************************ 00:05:55.512 10:26:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:05:55.512 10:26:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=72533 00:05:55.512 10:26:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 72533 /var/tmp/spdk.sock 00:05:55.512 10:26:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 72533 ']' 00:05:55.512 10:26:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.512 10:26:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:55.512 10:26:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.512 10:26:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:55.512 10:26:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.512 10:26:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:55.512 [2024-09-28 10:26:30.205044] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:05:55.512 [2024-09-28 10:26:30.205717] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72533 ] 00:05:55.771 [2024-09-28 10:26:30.333383] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:55.772 [2024-09-28 10:26:30.350240] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:55.772 [2024-09-28 10:26:30.350273] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.772 [2024-09-28 10:26:30.380109] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:56.347 10:26:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:56.347 10:26:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:56.347 10:26:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=72549 00:05:56.347 10:26:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:56.347 10:26:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 72549 /var/tmp/spdk2.sock 00:05:56.347 10:26:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 72549 ']' 00:05:56.347 10:26:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:56.347 10:26:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:56.347 10:26:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:56.347 10:26:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:56.347 10:26:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.624 [2024-09-28 10:26:31.128805] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:05:56.624 [2024-09-28 10:26:31.129188] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72549 ] 00:05:56.624 [2024-09-28 10:26:31.272528] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:56.624 [2024-09-28 10:26:31.290131] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.624 [2024-09-28 10:26:31.347819] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.559 10:26:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:57.559 10:26:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:57.559 10:26:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 72549 00:05:57.559 10:26:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72549 00:05:57.559 10:26:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:57.817 10:26:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 72533 00:05:57.817 10:26:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 72533 ']' 00:05:57.817 10:26:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 72533 00:05:57.817 10:26:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:57.817 10:26:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:57.817 10:26:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72533 00:05:57.817 10:26:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:57.817 10:26:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:57.817 killing process with pid 72533 00:05:57.817 10:26:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72533' 00:05:57.817 10:26:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 72533 00:05:57.818 10:26:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 72533 00:05:58.383 10:26:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 72549 00:05:58.383 10:26:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 72549 ']' 00:05:58.383 10:26:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 72549 00:05:58.383 10:26:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:58.383 10:26:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:58.383 10:26:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72549 00:05:58.383 killing process with pid 72549 00:05:58.383 10:26:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:58.383 10:26:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:58.383 10:26:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72549' 00:05:58.383 10:26:32 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@969 -- # kill 72549 00:05:58.383 10:26:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 72549 00:05:58.640 00:05:58.640 real 0m3.085s 00:05:58.640 user 0m3.426s 00:05:58.640 sys 0m0.828s 00:05:58.640 10:26:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:58.640 10:26:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.640 ************************************ 00:05:58.640 END TEST locking_app_on_unlocked_coremask 00:05:58.640 ************************************ 00:05:58.640 10:26:33 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:58.641 10:26:33 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:58.641 10:26:33 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.641 10:26:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.641 ************************************ 00:05:58.641 START TEST locking_app_on_locked_coremask 00:05:58.641 ************************************ 00:05:58.641 10:26:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:05:58.641 10:26:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=72607 00:05:58.641 10:26:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 72607 /var/tmp/spdk.sock 00:05:58.641 10:26:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 72607 ']' 00:05:58.641 10:26:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.641 10:26:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:58.641 10:26:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:58.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.641 10:26:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.641 10:26:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:58.641 10:26:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.641 [2024-09-28 10:26:33.328234] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:05:58.641 [2024-09-28 10:26:33.328362] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72607 ] 00:05:58.897 [2024-09-28 10:26:33.456187] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:58.897 [2024-09-28 10:26:33.475226] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.897 [2024-09-28 10:26:33.517315] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.463 10:26:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:59.463 10:26:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:59.463 10:26:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=72623 00:05:59.463 10:26:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 72623 /var/tmp/spdk2.sock 00:05:59.463 10:26:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:59.463 10:26:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 72623 /var/tmp/spdk2.sock 00:05:59.463 10:26:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:59.463 10:26:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:59.463 10:26:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:59.463 10:26:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:59.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:59.463 10:26:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:59.463 10:26:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 72623 /var/tmp/spdk2.sock 00:05:59.463 10:26:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 72623 ']' 00:05:59.463 10:26:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:59.463 10:26:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:59.463 10:26:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:59.463 10:26:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:59.463 10:26:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.463 [2024-09-28 10:26:34.224309] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:05:59.463 [2024-09-28 10:26:34.224426] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72623 ] 00:05:59.721 [2024-09-28 10:26:34.354629] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:59.721 [2024-09-28 10:26:34.379442] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 72607 has claimed it. 
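Editor's note: the ERROR lines around this point are the expected outcome of the locked-coremask case. The first target (pid 72607) holds the per-core lock for core 0, so a second target started with the same -m 0x1 cannot claim it and exits. A rough sketch of that conflict, assuming the /var/tmp/spdk_cpu_lock_* lock files that the later check_remaining_locks output shows, with a plain sleep standing in for waitforlisten:

    # First instance claims core 0 and keeps running.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    first=$!
    sleep 1   # simplified wait for startup
    # The claimed core shows up as a spdk_cpu_lock entry in lslocks.
    lslocks -p "$first" | grep spdk_cpu_lock
    # A second instance on the same mask should refuse to start and exit non-zero.
    if ! /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock; then
        echo "second instance could not claim core 0, as expected"
    fi
    kill "$first"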
00:05:59.721 [2024-09-28 10:26:34.379511] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:00.289 ERROR: process (pid: 72623) is no longer running 00:06:00.289 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (72623) - No such process 00:06:00.289 10:26:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:00.289 10:26:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:00.289 10:26:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:00.289 10:26:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:00.289 10:26:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:00.289 10:26:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:00.289 10:26:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 72607 00:06:00.289 10:26:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:00.289 10:26:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72607 00:06:00.289 10:26:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 72607 00:06:00.289 10:26:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 72607 ']' 00:06:00.289 10:26:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 72607 00:06:00.289 10:26:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:00.289 10:26:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:00.289 10:26:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72607 00:06:00.289 killing process with pid 72607 00:06:00.289 10:26:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:00.289 10:26:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:00.289 10:26:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72607' 00:06:00.289 10:26:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 72607 00:06:00.289 10:26:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 72607 00:06:00.860 ************************************ 00:06:00.860 END TEST locking_app_on_locked_coremask 00:06:00.860 ************************************ 00:06:00.860 00:06:00.860 real 0m2.157s 00:06:00.860 user 0m2.360s 00:06:00.860 sys 0m0.534s 00:06:00.860 10:26:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:00.860 10:26:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.860 10:26:35 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:00.860 10:26:35 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:00.860 10:26:35 event.cpu_locks 
-- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:00.860 10:26:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.860 ************************************ 00:06:00.860 START TEST locking_overlapped_coremask 00:06:00.860 ************************************ 00:06:00.860 10:26:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:00.860 10:26:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=72665 00:06:00.860 10:26:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 72665 /var/tmp/spdk.sock 00:06:00.860 10:26:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 72665 ']' 00:06:00.860 10:26:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.860 10:26:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:00.860 10:26:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.860 10:26:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:00.860 10:26:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:00.860 10:26:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.860 [2024-09-28 10:26:35.539448] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:06:00.860 [2024-09-28 10:26:35.539561] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72665 ] 00:06:01.119 [2024-09-28 10:26:35.668907] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
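Editor's note: for the overlapped-coremask case it helps to spell out the mask arithmetic. The first target runs with -m 0x7 (cores 0, 1, 2) and the second is started below with -m 0x1c (cores 2, 3, 4), so the two masks intersect exactly on core 2, which is why the second instance later reports that it cannot create a lock on core 2. A small sketch of that check; the variable names are illustrative only:

    # 0x7  = 0b00111 -> cores 0,1,2
    # 0x1c = 0b11100 -> cores 2,3,4
    mask_a=0x7
    mask_b=0x1c
    overlap=$(( mask_a & mask_b ))      # 0x4: only bit 2 is set
    printf 'overlapping core mask: 0x%x\n' "$overlap"
    # List the core indices both masks try to claim.
    for core in $(seq 0 4); do
        if (( (overlap >> core) & 1 )); then
            echo "core $core is claimed by both masks"
        fi
    done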
00:06:01.119 [2024-09-28 10:26:35.687542] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:01.119 [2024-09-28 10:26:35.729377] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.119 [2024-09-28 10:26:35.729550] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.119 [2024-09-28 10:26:35.729627] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:01.684 10:26:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:01.684 10:26:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:01.684 10:26:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:01.684 10:26:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=72683 00:06:01.684 10:26:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 72683 /var/tmp/spdk2.sock 00:06:01.684 10:26:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:01.684 10:26:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 72683 /var/tmp/spdk2.sock 00:06:01.684 10:26:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:01.684 10:26:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.684 10:26:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:01.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:01.684 10:26:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.684 10:26:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 72683 /var/tmp/spdk2.sock 00:06:01.684 10:26:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 72683 ']' 00:06:01.684 10:26:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:01.684 10:26:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:01.684 10:26:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:01.684 10:26:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:01.684 10:26:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.684 [2024-09-28 10:26:36.432992] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:06:01.684 [2024-09-28 10:26:36.433109] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72683 ] 00:06:01.942 [2024-09-28 10:26:36.567358] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:06:01.942 [2024-09-28 10:26:36.584741] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72665 has claimed it. 00:06:01.942 [2024-09-28 10:26:36.584777] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:02.509 ERROR: process (pid: 72683) is no longer running 00:06:02.509 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (72683) - No such process 00:06:02.509 10:26:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:02.509 10:26:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:02.509 10:26:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:02.509 10:26:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:02.509 10:26:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:02.509 10:26:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:02.509 10:26:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:02.509 10:26:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:02.509 10:26:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:02.509 10:26:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:02.509 10:26:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 72665 00:06:02.509 10:26:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 72665 ']' 00:06:02.509 10:26:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 72665 00:06:02.509 10:26:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:02.509 10:26:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:02.509 10:26:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72665 00:06:02.509 10:26:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:02.509 10:26:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:02.509 10:26:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72665' 00:06:02.509 killing process with pid 72665 00:06:02.509 10:26:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 72665 00:06:02.509 10:26:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 72665 00:06:02.770 00:06:02.770 real 0m1.897s 00:06:02.770 user 0m5.181s 00:06:02.770 sys 0m0.408s 00:06:02.770 10:26:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.770 
10:26:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:02.770 ************************************ 00:06:02.770 END TEST locking_overlapped_coremask 00:06:02.770 ************************************ 00:06:02.770 10:26:37 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:02.770 10:26:37 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:02.770 10:26:37 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.770 10:26:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.770 ************************************ 00:06:02.770 START TEST locking_overlapped_coremask_via_rpc 00:06:02.770 ************************************ 00:06:02.770 10:26:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:02.770 10:26:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=72725 00:06:02.770 10:26:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 72725 /var/tmp/spdk.sock 00:06:02.770 10:26:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 72725 ']' 00:06:02.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.770 10:26:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.770 10:26:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:02.770 10:26:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.770 10:26:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:02.770 10:26:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:02.770 10:26:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.770 [2024-09-28 10:26:37.482303] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:06:02.770 [2024-09-28 10:26:37.482398] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72725 ] 00:06:03.029 [2024-09-28 10:26:37.605618] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:03.029 [2024-09-28 10:26:37.623683] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
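Editor's note: this via_rpc variant starts both targets with --disable-cpumask-locks (hence the "CPU core locks deactivated" notices) and only claims the locks afterwards over JSON-RPC. A minimal sketch of that flow for the first instance, assuming the default /var/tmp/spdk.sock RPC socket and using a plain sleep where the test script waits for the listener:

    # Start without taking the per-core lock files.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
    sleep 1   # simplified wait for the RPC socket
    # Claim the locks for cores 0-2 after the fact via JSON-RPC.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks
    # The lock files should now exist for every core in the mask.
    ls /var/tmp/spdk_cpu_lock_00{0,1,2}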
00:06:03.029 [2024-09-28 10:26:37.623826] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:03.029 [2024-09-28 10:26:37.654547] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.029 [2024-09-28 10:26:37.654663] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.029 [2024-09-28 10:26:37.654730] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:03.595 10:26:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:03.595 10:26:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:03.595 10:26:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:03.595 10:26:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=72743 00:06:03.595 10:26:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 72743 /var/tmp/spdk2.sock 00:06:03.595 10:26:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 72743 ']' 00:06:03.595 10:26:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:03.595 10:26:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:03.595 10:26:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:03.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:03.595 10:26:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:03.595 10:26:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.864 [2024-09-28 10:26:38.393186] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:06:03.864 [2024-09-28 10:26:38.393317] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72743 ] 00:06:03.864 [2024-09-28 10:26:38.524032] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:03.864 [2024-09-28 10:26:38.547498] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:03.864 [2024-09-28 10:26:38.547541] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:03.864 [2024-09-28 10:26:38.612800] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:03.864 [2024-09-28 10:26:38.616057] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:03.864 [2024-09-28 10:26:38.616126] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:06:04.795 10:26:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:04.795 10:26:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:04.795 10:26:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:04.795 10:26:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.795 10:26:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.795 10:26:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.795 10:26:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:04.795 10:26:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:04.795 10:26:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:04.795 10:26:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:04.795 10:26:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:04.795 10:26:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:04.795 10:26:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:04.795 10:26:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:04.795 10:26:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.795 10:26:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.795 [2024-09-28 10:26:39.260070] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72725 has claimed it. 00:06:04.795 request: 00:06:04.795 { 00:06:04.795 "method": "framework_enable_cpumask_locks", 00:06:04.795 "req_id": 1 00:06:04.795 } 00:06:04.795 Got JSON-RPC error response 00:06:04.795 response: 00:06:04.795 { 00:06:04.795 "code": -32603, 00:06:04.795 "message": "Failed to claim CPU core: 2" 00:06:04.795 } 00:06:04.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
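Editor's note: the JSON-RPC exchange above is the negative half of the test. The second target (mask 0x1c, locks still disabled) asks to enable its locks, but core 2 is already held by pid 72725, so the call comes back with error -32603 "Failed to claim CPU core: 2". A sketch of how a script can assert that failure and then inspect which lock files remain, following the check_remaining_locks pattern that appears further down in the log:

    # Expect the RPC to fail when issued against the second instance's socket.
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock \
            framework_enable_cpumask_locks; then
        echo "unexpected success: core 2 should already be locked" >&2
        exit 1
    fi
    # Only the first instance's cores (0-2) should have lock files.
    locks=(/var/tmp/spdk_cpu_lock_*)
    expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ "${locks[*]}" == "${expected[*]}" ]] && echo "lock files match cores 0-2"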
00:06:04.795 10:26:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:04.795 10:26:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:04.795 10:26:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:04.795 10:26:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:04.795 10:26:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:04.795 10:26:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 72725 /var/tmp/spdk.sock 00:06:04.795 10:26:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 72725 ']' 00:06:04.795 10:26:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.795 10:26:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:04.795 10:26:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.795 10:26:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:04.795 10:26:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:04.795 10:26:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:04.795 10:26:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:04.795 10:26:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 72743 /var/tmp/spdk2.sock 00:06:04.795 10:26:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 72743 ']' 00:06:04.795 10:26:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:04.795 10:26:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:04.795 10:26:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:04.795 10:26:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:04.795 10:26:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.053 ************************************ 00:06:05.053 END TEST locking_overlapped_coremask_via_rpc 00:06:05.053 ************************************ 00:06:05.053 10:26:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:05.053 10:26:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:05.053 10:26:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:05.053 10:26:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:05.053 10:26:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:05.053 10:26:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:05.053 00:06:05.053 real 0m2.254s 00:06:05.053 user 0m1.042s 00:06:05.053 sys 0m0.142s 00:06:05.053 10:26:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.053 10:26:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.053 10:26:39 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:05.053 10:26:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 72725 ]] 00:06:05.053 10:26:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 72725 00:06:05.053 10:26:39 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 72725 ']' 00:06:05.053 10:26:39 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 72725 00:06:05.053 10:26:39 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:05.053 10:26:39 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:05.053 10:26:39 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72725 00:06:05.053 killing process with pid 72725 00:06:05.053 10:26:39 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:05.053 10:26:39 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:05.053 10:26:39 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72725' 00:06:05.053 10:26:39 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 72725 00:06:05.053 10:26:39 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 72725 00:06:05.310 10:26:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 72743 ]] 00:06:05.310 10:26:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 72743 00:06:05.310 10:26:39 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 72743 ']' 00:06:05.310 10:26:39 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 72743 00:06:05.310 10:26:39 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:05.310 10:26:39 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:05.310 
10:26:39 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72743 00:06:05.310 killing process with pid 72743 00:06:05.310 10:26:39 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:05.310 10:26:39 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:05.310 10:26:39 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72743' 00:06:05.310 10:26:39 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 72743 00:06:05.311 10:26:39 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 72743 00:06:05.569 10:26:40 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:05.569 Process with pid 72725 is not found 00:06:05.569 10:26:40 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:05.569 10:26:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 72725 ]] 00:06:05.569 10:26:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 72725 00:06:05.569 10:26:40 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 72725 ']' 00:06:05.569 10:26:40 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 72725 00:06:05.569 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (72725) - No such process 00:06:05.569 10:26:40 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 72725 is not found' 00:06:05.569 10:26:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 72743 ]] 00:06:05.569 10:26:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 72743 00:06:05.569 Process with pid 72743 is not found 00:06:05.569 10:26:40 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 72743 ']' 00:06:05.569 10:26:40 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 72743 00:06:05.569 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (72743) - No such process 00:06:05.569 10:26:40 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 72743 is not found' 00:06:05.569 10:26:40 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:05.569 ************************************ 00:06:05.569 END TEST cpu_locks 00:06:05.569 ************************************ 00:06:05.569 00:06:05.569 real 0m15.958s 00:06:05.569 user 0m28.093s 00:06:05.569 sys 0m4.185s 00:06:05.569 10:26:40 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.569 10:26:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.569 ************************************ 00:06:05.569 END TEST event 00:06:05.569 ************************************ 00:06:05.569 00:06:05.569 real 0m41.889s 00:06:05.569 user 1m21.406s 00:06:05.569 sys 0m7.003s 00:06:05.569 10:26:40 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.569 10:26:40 event -- common/autotest_common.sh@10 -- # set +x 00:06:05.569 10:26:40 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:05.569 10:26:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.569 10:26:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.569 10:26:40 -- common/autotest_common.sh@10 -- # set +x 00:06:05.569 ************************************ 00:06:05.569 START TEST thread 00:06:05.569 ************************************ 00:06:05.569 10:26:40 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:05.827 * Looking for test storage... 
00:06:05.827 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:05.827 10:26:40 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:05.827 10:26:40 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:06:05.827 10:26:40 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:05.827 10:26:40 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:05.827 10:26:40 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.827 10:26:40 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.827 10:26:40 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.827 10:26:40 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.827 10:26:40 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.827 10:26:40 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.827 10:26:40 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.827 10:26:40 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.827 10:26:40 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.827 10:26:40 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.827 10:26:40 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.827 10:26:40 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:05.827 10:26:40 thread -- scripts/common.sh@345 -- # : 1 00:06:05.827 10:26:40 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.827 10:26:40 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:05.827 10:26:40 thread -- scripts/common.sh@365 -- # decimal 1 00:06:05.827 10:26:40 thread -- scripts/common.sh@353 -- # local d=1 00:06:05.827 10:26:40 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.827 10:26:40 thread -- scripts/common.sh@355 -- # echo 1 00:06:05.827 10:26:40 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.827 10:26:40 thread -- scripts/common.sh@366 -- # decimal 2 00:06:05.827 10:26:40 thread -- scripts/common.sh@353 -- # local d=2 00:06:05.827 10:26:40 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.827 10:26:40 thread -- scripts/common.sh@355 -- # echo 2 00:06:05.827 10:26:40 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:05.827 10:26:40 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:05.827 10:26:40 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:05.827 10:26:40 thread -- scripts/common.sh@368 -- # return 0 00:06:05.827 10:26:40 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.827 10:26:40 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:05.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.827 --rc genhtml_branch_coverage=1 00:06:05.827 --rc genhtml_function_coverage=1 00:06:05.827 --rc genhtml_legend=1 00:06:05.827 --rc geninfo_all_blocks=1 00:06:05.827 --rc geninfo_unexecuted_blocks=1 00:06:05.827 00:06:05.827 ' 00:06:05.827 10:26:40 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:05.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.827 --rc genhtml_branch_coverage=1 00:06:05.827 --rc genhtml_function_coverage=1 00:06:05.827 --rc genhtml_legend=1 00:06:05.827 --rc geninfo_all_blocks=1 00:06:05.827 --rc geninfo_unexecuted_blocks=1 00:06:05.827 00:06:05.827 ' 00:06:05.827 10:26:40 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:05.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:05.827 --rc genhtml_branch_coverage=1 00:06:05.827 --rc genhtml_function_coverage=1 00:06:05.827 --rc genhtml_legend=1 00:06:05.827 --rc geninfo_all_blocks=1 00:06:05.827 --rc geninfo_unexecuted_blocks=1 00:06:05.827 00:06:05.827 ' 00:06:05.827 10:26:40 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:05.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.827 --rc genhtml_branch_coverage=1 00:06:05.827 --rc genhtml_function_coverage=1 00:06:05.827 --rc genhtml_legend=1 00:06:05.827 --rc geninfo_all_blocks=1 00:06:05.827 --rc geninfo_unexecuted_blocks=1 00:06:05.827 00:06:05.827 ' 00:06:05.827 10:26:40 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:05.827 10:26:40 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:05.827 10:26:40 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.827 10:26:40 thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.827 ************************************ 00:06:05.827 START TEST thread_poller_perf 00:06:05.827 ************************************ 00:06:05.827 10:26:40 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:05.827 [2024-09-28 10:26:40.511198] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:06:05.828 [2024-09-28 10:26:40.511435] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72870 ] 00:06:06.085 [2024-09-28 10:26:40.639574] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:06.085 [2024-09-28 10:26:40.655908] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.085 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:06.085 [2024-09-28 10:26:40.684713] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.019 ====================================== 00:06:07.019 busy:2611602594 (cyc) 00:06:07.019 total_run_count: 412000 00:06:07.019 tsc_hz: 2600000000 (cyc) 00:06:07.019 ====================================== 00:06:07.019 poller_cost: 6338 (cyc), 2437 (nsec) 00:06:07.019 00:06:07.019 real 0m1.258s 00:06:07.019 user 0m1.084s 00:06:07.019 sys 0m0.068s 00:06:07.019 10:26:41 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:07.019 ************************************ 00:06:07.019 END TEST thread_poller_perf 00:06:07.019 ************************************ 00:06:07.019 10:26:41 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:07.019 10:26:41 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:07.019 10:26:41 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:07.019 10:26:41 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:07.019 10:26:41 thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.019 ************************************ 00:06:07.019 START TEST thread_poller_perf 00:06:07.019 ************************************ 00:06:07.019 10:26:41 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:07.278 [2024-09-28 10:26:41.812243] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:06:07.278 [2024-09-28 10:26:41.812419] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72906 ] 00:06:07.278 [2024-09-28 10:26:41.934516] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:07.278 [2024-09-28 10:26:41.955262] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.278 [2024-09-28 10:26:41.984317] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.278 Running 1000 pollers for 1 seconds with 0 microseconds period. 
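Editor's note: the poller_cost figures printed for the first run follow directly from the counters above: 2611602594 busy cycles spread over 412000 poller invocations is about 6338 cycles per call, and at the reported TSC rate of 2600000000 Hz that is roughly 2437 ns. A tiny sketch of the arithmetic; shell integer arithmetic truncates rather than rounds, which matches the reported values here:

    busy=2611602594          # busy TSC cycles reported by poller_perf
    runs=412000              # total_run_count
    tsc_hz=2600000000        # TSC frequency in Hz
    cyc=$(( busy / runs ))                       # ~6338 cycles per poller call
    ns=$(( cyc * 1000000000 / tsc_hz ))          # ~2437 ns per poller call
    echo "poller_cost: ${cyc} (cyc), ${ns} (nsec)"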
00:06:08.652 ====================================== 00:06:08.652 busy:2602599020 (cyc) 00:06:08.652 total_run_count: 5349000 00:06:08.652 tsc_hz: 2600000000 (cyc) 00:06:08.652 ====================================== 00:06:08.652 poller_cost: 486 (cyc), 186 (nsec) 00:06:08.652 00:06:08.652 real 0m1.252s 00:06:08.652 user 0m1.077s 00:06:08.652 sys 0m0.068s 00:06:08.652 10:26:43 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.652 10:26:43 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:08.652 ************************************ 00:06:08.652 END TEST thread_poller_perf 00:06:08.652 ************************************ 00:06:08.652 10:26:43 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:08.652 00:06:08.652 real 0m2.742s 00:06:08.652 user 0m2.275s 00:06:08.652 sys 0m0.255s 00:06:08.652 10:26:43 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.652 ************************************ 00:06:08.652 END TEST thread 00:06:08.652 ************************************ 00:06:08.652 10:26:43 thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.652 10:26:43 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:08.652 10:26:43 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:08.652 10:26:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.652 10:26:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.652 10:26:43 -- common/autotest_common.sh@10 -- # set +x 00:06:08.652 ************************************ 00:06:08.652 START TEST app_cmdline 00:06:08.652 ************************************ 00:06:08.652 10:26:43 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:08.652 * Looking for test storage... 00:06:08.652 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:08.652 10:26:43 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:08.652 10:26:43 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:06:08.652 10:26:43 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:08.652 10:26:43 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:08.652 10:26:43 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.652 10:26:43 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.652 10:26:43 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.652 10:26:43 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.652 10:26:43 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.652 10:26:43 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.652 10:26:43 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.652 10:26:43 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.652 10:26:43 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.652 10:26:43 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.652 10:26:43 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.652 10:26:43 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:08.652 10:26:43 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:08.652 10:26:43 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.652 10:26:43 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:08.652 10:26:43 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:08.652 10:26:43 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:08.652 10:26:43 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.652 10:26:43 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:08.652 10:26:43 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.652 10:26:43 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:08.652 10:26:43 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:08.652 10:26:43 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.652 10:26:43 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:08.652 10:26:43 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.652 10:26:43 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.652 10:26:43 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.652 10:26:43 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:08.652 10:26:43 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.652 10:26:43 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:08.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.652 --rc genhtml_branch_coverage=1 00:06:08.652 --rc genhtml_function_coverage=1 00:06:08.652 --rc genhtml_legend=1 00:06:08.652 --rc geninfo_all_blocks=1 00:06:08.652 --rc geninfo_unexecuted_blocks=1 00:06:08.652 00:06:08.652 ' 00:06:08.652 10:26:43 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:08.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.652 --rc genhtml_branch_coverage=1 00:06:08.652 --rc genhtml_function_coverage=1 00:06:08.652 --rc genhtml_legend=1 00:06:08.652 --rc geninfo_all_blocks=1 00:06:08.652 --rc geninfo_unexecuted_blocks=1 00:06:08.652 00:06:08.652 ' 00:06:08.652 10:26:43 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:08.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.652 --rc genhtml_branch_coverage=1 00:06:08.652 --rc genhtml_function_coverage=1 00:06:08.652 --rc genhtml_legend=1 00:06:08.652 --rc geninfo_all_blocks=1 00:06:08.652 --rc geninfo_unexecuted_blocks=1 00:06:08.652 00:06:08.652 ' 00:06:08.652 10:26:43 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:08.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.652 --rc genhtml_branch_coverage=1 00:06:08.653 --rc genhtml_function_coverage=1 00:06:08.653 --rc genhtml_legend=1 00:06:08.653 --rc geninfo_all_blocks=1 00:06:08.653 --rc geninfo_unexecuted_blocks=1 00:06:08.653 00:06:08.653 ' 00:06:08.653 10:26:43 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:08.653 10:26:43 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=72990 00:06:08.653 10:26:43 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 72990 00:06:08.653 10:26:43 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 72990 ']' 00:06:08.653 10:26:43 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:08.653 10:26:43 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.653 10:26:43 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:08.653 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:06:08.653 10:26:43 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.653 10:26:43 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:08.653 10:26:43 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:08.653 [2024-09-28 10:26:43.305137] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:06:08.653 [2024-09-28 10:26:43.305367] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72990 ] 00:06:08.653 [2024-09-28 10:26:43.427860] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:08.915 [2024-09-28 10:26:43.446779] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.915 [2024-09-28 10:26:43.474830] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.481 10:26:44 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:09.481 10:26:44 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:09.481 10:26:44 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:09.739 { 00:06:09.739 "version": "SPDK v25.01-pre git sha1 09cc66129", 00:06:09.739 "fields": { 00:06:09.739 "major": 25, 00:06:09.739 "minor": 1, 00:06:09.739 "patch": 0, 00:06:09.739 "suffix": "-pre", 00:06:09.739 "commit": "09cc66129" 00:06:09.739 } 00:06:09.739 } 00:06:09.739 10:26:44 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:09.739 10:26:44 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:09.739 10:26:44 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:09.739 10:26:44 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:09.739 10:26:44 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:09.739 10:26:44 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.739 10:26:44 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:09.739 10:26:44 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:09.739 10:26:44 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:09.739 10:26:44 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.739 10:26:44 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:09.739 10:26:44 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:09.739 10:26:44 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:09.739 10:26:44 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:09.739 10:26:44 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:09.739 10:26:44 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:09.739 10:26:44 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.739 10:26:44 app_cmdline -- common/autotest_common.sh@642 -- # type -t 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:09.739 10:26:44 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.739 10:26:44 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:09.739 10:26:44 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.739 10:26:44 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:09.739 10:26:44 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:09.739 10:26:44 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:09.997 request: 00:06:09.997 { 00:06:09.997 "method": "env_dpdk_get_mem_stats", 00:06:09.997 "req_id": 1 00:06:09.997 } 00:06:09.997 Got JSON-RPC error response 00:06:09.997 response: 00:06:09.997 { 00:06:09.997 "code": -32601, 00:06:09.997 "message": "Method not found" 00:06:09.997 } 00:06:09.997 10:26:44 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:09.997 10:26:44 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:09.997 10:26:44 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:09.997 10:26:44 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:09.997 10:26:44 app_cmdline -- app/cmdline.sh@1 -- # killprocess 72990 00:06:09.997 10:26:44 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 72990 ']' 00:06:09.997 10:26:44 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 72990 00:06:09.997 10:26:44 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:09.997 10:26:44 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:09.997 10:26:44 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72990 00:06:09.997 killing process with pid 72990 00:06:09.997 10:26:44 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:09.997 10:26:44 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:09.997 10:26:44 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72990' 00:06:09.997 10:26:44 app_cmdline -- common/autotest_common.sh@969 -- # kill 72990 00:06:09.997 10:26:44 app_cmdline -- common/autotest_common.sh@974 -- # wait 72990 00:06:10.256 00:06:10.256 real 0m1.716s 00:06:10.256 user 0m2.089s 00:06:10.256 sys 0m0.356s 00:06:10.256 10:26:44 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:10.256 ************************************ 00:06:10.256 END TEST app_cmdline 00:06:10.256 ************************************ 00:06:10.256 10:26:44 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:10.256 10:26:44 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:10.256 10:26:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:10.256 10:26:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.256 10:26:44 -- common/autotest_common.sh@10 -- # set +x 00:06:10.256 ************************************ 00:06:10.256 START TEST version 00:06:10.256 ************************************ 00:06:10.256 10:26:44 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:10.256 * Looking for test storage... 
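For reference, the app_cmdline run above starts spdk_tgt with --rpcs-allowed so that only spdk_get_version and rpc_get_methods are callable, then confirms that anything else is rejected. A minimal hand-run sketch of the same check (assuming the same repository layout and a free /var/tmp/spdk.sock), not a verbatim excerpt of the log:

# Start the target with only two RPCs allowed, exactly as in the trace above.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &

# Allowed methods answer normally.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version
/home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort

# Any method outside the allow-list fails with JSON-RPC error -32601 "Method not found",
# which is what the test expects from env_dpdk_get_mem_stats.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats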
00:06:10.256 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:10.256 10:26:44 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:10.256 10:26:44 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:10.256 10:26:44 version -- common/autotest_common.sh@1681 -- # lcov --version 00:06:10.256 10:26:44 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:10.256 10:26:44 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.256 10:26:44 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.256 10:26:44 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.256 10:26:44 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.256 10:26:44 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.256 10:26:44 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.256 10:26:44 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.256 10:26:44 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.256 10:26:44 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.256 10:26:44 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.256 10:26:44 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.256 10:26:44 version -- scripts/common.sh@344 -- # case "$op" in 00:06:10.256 10:26:44 version -- scripts/common.sh@345 -- # : 1 00:06:10.256 10:26:44 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.256 10:26:45 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:10.256 10:26:45 version -- scripts/common.sh@365 -- # decimal 1 00:06:10.256 10:26:45 version -- scripts/common.sh@353 -- # local d=1 00:06:10.256 10:26:45 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.256 10:26:45 version -- scripts/common.sh@355 -- # echo 1 00:06:10.256 10:26:45 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.256 10:26:45 version -- scripts/common.sh@366 -- # decimal 2 00:06:10.256 10:26:45 version -- scripts/common.sh@353 -- # local d=2 00:06:10.256 10:26:45 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.256 10:26:45 version -- scripts/common.sh@355 -- # echo 2 00:06:10.256 10:26:45 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.256 10:26:45 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.256 10:26:45 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.256 10:26:45 version -- scripts/common.sh@368 -- # return 0 00:06:10.256 10:26:45 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.256 10:26:45 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:10.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.256 --rc genhtml_branch_coverage=1 00:06:10.256 --rc genhtml_function_coverage=1 00:06:10.256 --rc genhtml_legend=1 00:06:10.256 --rc geninfo_all_blocks=1 00:06:10.256 --rc geninfo_unexecuted_blocks=1 00:06:10.256 00:06:10.256 ' 00:06:10.256 10:26:45 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:10.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.256 --rc genhtml_branch_coverage=1 00:06:10.256 --rc genhtml_function_coverage=1 00:06:10.256 --rc genhtml_legend=1 00:06:10.256 --rc geninfo_all_blocks=1 00:06:10.256 --rc geninfo_unexecuted_blocks=1 00:06:10.256 00:06:10.256 ' 00:06:10.256 10:26:45 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:10.256 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:10.256 --rc genhtml_branch_coverage=1 00:06:10.256 --rc genhtml_function_coverage=1 00:06:10.256 --rc genhtml_legend=1 00:06:10.256 --rc geninfo_all_blocks=1 00:06:10.256 --rc geninfo_unexecuted_blocks=1 00:06:10.256 00:06:10.256 ' 00:06:10.256 10:26:45 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:10.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.256 --rc genhtml_branch_coverage=1 00:06:10.256 --rc genhtml_function_coverage=1 00:06:10.256 --rc genhtml_legend=1 00:06:10.256 --rc geninfo_all_blocks=1 00:06:10.256 --rc geninfo_unexecuted_blocks=1 00:06:10.256 00:06:10.256 ' 00:06:10.256 10:26:45 version -- app/version.sh@17 -- # get_header_version major 00:06:10.256 10:26:45 version -- app/version.sh@14 -- # cut -f2 00:06:10.256 10:26:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:10.256 10:26:45 version -- app/version.sh@14 -- # tr -d '"' 00:06:10.256 10:26:45 version -- app/version.sh@17 -- # major=25 00:06:10.256 10:26:45 version -- app/version.sh@18 -- # get_header_version minor 00:06:10.256 10:26:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:10.256 10:26:45 version -- app/version.sh@14 -- # cut -f2 00:06:10.256 10:26:45 version -- app/version.sh@14 -- # tr -d '"' 00:06:10.256 10:26:45 version -- app/version.sh@18 -- # minor=1 00:06:10.256 10:26:45 version -- app/version.sh@19 -- # get_header_version patch 00:06:10.256 10:26:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:10.256 10:26:45 version -- app/version.sh@14 -- # cut -f2 00:06:10.256 10:26:45 version -- app/version.sh@14 -- # tr -d '"' 00:06:10.256 10:26:45 version -- app/version.sh@19 -- # patch=0 00:06:10.256 10:26:45 version -- app/version.sh@20 -- # get_header_version suffix 00:06:10.256 10:26:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:10.256 10:26:45 version -- app/version.sh@14 -- # tr -d '"' 00:06:10.256 10:26:45 version -- app/version.sh@14 -- # cut -f2 00:06:10.516 10:26:45 version -- app/version.sh@20 -- # suffix=-pre 00:06:10.516 10:26:45 version -- app/version.sh@22 -- # version=25.1 00:06:10.516 10:26:45 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:10.516 10:26:45 version -- app/version.sh@28 -- # version=25.1rc0 00:06:10.516 10:26:45 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:10.516 10:26:45 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:10.516 10:26:45 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:10.516 10:26:45 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:10.516 ************************************ 00:06:10.516 END TEST version 00:06:10.516 ************************************ 00:06:10.516 00:06:10.516 real 0m0.180s 00:06:10.516 user 0m0.113s 00:06:10.516 sys 0m0.091s 00:06:10.516 10:26:45 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:10.516 10:26:45 version -- common/autotest_common.sh@10 -- # set +x 00:06:10.516 10:26:45 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:10.516 10:26:45 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:10.516 10:26:45 -- spdk/autotest.sh@194 -- # uname -s 00:06:10.516 10:26:45 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:10.516 10:26:45 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:10.516 10:26:45 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:10.516 10:26:45 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:10.516 10:26:45 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:10.516 10:26:45 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:10.516 10:26:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.516 10:26:45 -- common/autotest_common.sh@10 -- # set +x 00:06:10.516 ************************************ 00:06:10.516 START TEST blockdev_nvme 00:06:10.516 ************************************ 00:06:10.516 10:26:45 blockdev_nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:10.516 * Looking for test storage... 00:06:10.516 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:10.516 10:26:45 blockdev_nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:10.516 10:26:45 blockdev_nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:06:10.516 10:26:45 blockdev_nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:10.516 10:26:45 blockdev_nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:10.516 10:26:45 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.516 10:26:45 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.516 10:26:45 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.516 10:26:45 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.516 10:26:45 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.516 10:26:45 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.516 10:26:45 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.516 10:26:45 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.516 10:26:45 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.516 10:26:45 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.516 10:26:45 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.516 10:26:45 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:10.516 10:26:45 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:10.516 10:26:45 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.516 10:26:45 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:10.516 10:26:45 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:10.516 10:26:45 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:10.516 10:26:45 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.516 10:26:45 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:10.516 10:26:45 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.516 10:26:45 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:10.516 10:26:45 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:10.516 10:26:45 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.516 10:26:45 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:10.516 10:26:45 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.516 10:26:45 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.516 10:26:45 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.516 10:26:45 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:10.516 10:26:45 blockdev_nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.516 10:26:45 blockdev_nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:10.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.517 --rc genhtml_branch_coverage=1 00:06:10.517 --rc genhtml_function_coverage=1 00:06:10.517 --rc genhtml_legend=1 00:06:10.517 --rc geninfo_all_blocks=1 00:06:10.517 --rc geninfo_unexecuted_blocks=1 00:06:10.517 00:06:10.517 ' 00:06:10.517 10:26:45 blockdev_nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:10.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.517 --rc genhtml_branch_coverage=1 00:06:10.517 --rc genhtml_function_coverage=1 00:06:10.517 --rc genhtml_legend=1 00:06:10.517 --rc geninfo_all_blocks=1 00:06:10.517 --rc geninfo_unexecuted_blocks=1 00:06:10.517 00:06:10.517 ' 00:06:10.517 10:26:45 blockdev_nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:10.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.517 --rc genhtml_branch_coverage=1 00:06:10.517 --rc genhtml_function_coverage=1 00:06:10.517 --rc genhtml_legend=1 00:06:10.517 --rc geninfo_all_blocks=1 00:06:10.517 --rc geninfo_unexecuted_blocks=1 00:06:10.517 00:06:10.517 ' 00:06:10.517 10:26:45 blockdev_nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:10.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.517 --rc genhtml_branch_coverage=1 00:06:10.517 --rc genhtml_function_coverage=1 00:06:10.517 --rc genhtml_legend=1 00:06:10.517 --rc geninfo_all_blocks=1 00:06:10.517 --rc geninfo_unexecuted_blocks=1 00:06:10.517 00:06:10.517 ' 00:06:10.517 10:26:45 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:10.517 10:26:45 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:10.517 10:26:45 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:10.517 10:26:45 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:10.517 10:26:45 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:10.517 10:26:45 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:10.517 10:26:45 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:10.517 10:26:45 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:10.517 10:26:45 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:10.517 10:26:45 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:10.517 10:26:45 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:10.517 10:26:45 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:10.517 10:26:45 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:06:10.517 10:26:45 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:10.517 10:26:45 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:10.517 10:26:45 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:06:10.517 10:26:45 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:10.517 10:26:45 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:06:10.517 10:26:45 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:10.517 10:26:45 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:10.517 10:26:45 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:10.517 10:26:45 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:06:10.517 10:26:45 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:06:10.517 10:26:45 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:10.517 10:26:45 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73151 00:06:10.517 10:26:45 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:10.517 10:26:45 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 73151 00:06:10.517 10:26:45 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:10.517 10:26:45 blockdev_nvme -- common/autotest_common.sh@831 -- # '[' -z 73151 ']' 00:06:10.517 10:26:45 blockdev_nvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.517 10:26:45 blockdev_nvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:10.517 10:26:45 blockdev_nvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.517 10:26:45 blockdev_nvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:10.517 10:26:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:10.778 [2024-09-28 10:26:45.332694] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:06:10.778 [2024-09-28 10:26:45.332954] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73151 ] 00:06:10.778 [2024-09-28 10:26:45.461046] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
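For reference, the version test above derives the version string from include/spdk/version.h with a grep/cut/tr pipeline. A standalone re-creation of that extraction (same pipeline as the get_header_version calls in the trace; the helper name and comments here are illustrative):

#!/usr/bin/env bash
# Pull SPDK_VERSION_MAJOR / MINOR / PATCH / SUFFIX out of the public header,
# mirroring the pipeline traced above.
hdr=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
header_version() {
  grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'
}
major=$(header_version MAJOR)    # 25 in this run
minor=$(header_version MINOR)    # 1
patch=$(header_version PATCH)    # 0
suffix=$(header_version SUFFIX)  # -pre
version=$major.$minor
(( patch != 0 )) && version=$version.$patch
echo "$version$suffix"           # 25.1-pre here; the python package reports the equivalent 25.1rc0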
00:06:10.778 [2024-09-28 10:26:45.481934] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.778 [2024-09-28 10:26:45.515093] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.349 10:26:46 blockdev_nvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:11.349 10:26:46 blockdev_nvme -- common/autotest_common.sh@864 -- # return 0 00:06:11.610 10:26:46 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:11.610 10:26:46 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:06:11.610 10:26:46 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:11.610 10:26:46 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:11.610 10:26:46 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:11.610 10:26:46 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:11.610 10:26:46 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.610 10:26:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:11.872 10:26:46 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.872 10:26:46 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:11.872 10:26:46 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.872 10:26:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:11.872 10:26:46 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.872 10:26:46 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:06:11.872 10:26:46 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:11.872 10:26:46 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.872 10:26:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:11.872 10:26:46 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.872 10:26:46 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:11.872 10:26:46 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.872 10:26:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:11.872 10:26:46 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.872 10:26:46 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:11.872 10:26:46 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.872 10:26:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:11.872 10:26:46 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.872 10:26:46 blockdev_nvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:11.872 10:26:46 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:11.872 10:26:46 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:11.872 10:26:46 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 
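The subsystem configuration loaded above comes from scripts/gen_nvme.sh and simply issues one bdev_nvme_attach_controller per PCIe device. A hand-written equivalent, trimmed to two of the four controllers from the trace (the /tmp/nvme_bdevs.json path is illustrative), could be fed to the same RPC:

# Sketch: build the config by hand and load it into the running target.
cat > /tmp/nvme_bdevs.json <<'EOF'
{ "subsystem": "bdev", "config": [
  { "method": "bdev_nvme_attach_controller",
    "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" } },
  { "method": "bdev_nvme_attach_controller",
    "params": { "trtype": "PCIe", "name": "Nvme1", "traddr": "0000:00:11.0" } }
] }
EOF
/home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j "$(cat /tmp/nvme_bdevs.json)"
# The resulting Nvme0n1, Nvme1n1, ... bdevs can then be inspected, as the test does next:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs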
00:06:11.872 10:26:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:11.872 10:26:46 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.872 10:26:46 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:11.873 10:26:46 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "cc295e09-862f-4ba8-b7e5-610e19603caf"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "cc295e09-862f-4ba8-b7e5-610e19603caf",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "aa40b5c3-d5a9-455c-bcd9-84f223f619b8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "aa40b5c3-d5a9-455c-bcd9-84f223f619b8",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "21a397ba-2b1d-4ef4-9fa1-6315e64cd731"' ' ],' ' 
"product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "21a397ba-2b1d-4ef4-9fa1-6315e64cd731",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "476b5e90-213d-43a6-afef-cf3f1bfcdc82"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "476b5e90-213d-43a6-afef-cf3f1bfcdc82",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "470b222c-fe11-4f89-b6f4-f020bf211e66"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "470b222c-fe11-4f89-b6f4-f020bf211e66",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' 
"nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "f0a82761-a795-41b8-9fae-1d677a0e772e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "f0a82761-a795-41b8-9fae-1d677a0e772e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:11.873 10:26:46 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:11.873 10:26:46 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:11.873 10:26:46 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:11.873 10:26:46 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:11.873 10:26:46 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 73151 00:06:11.873 10:26:46 blockdev_nvme -- common/autotest_common.sh@950 -- # '[' -z 73151 ']' 00:06:11.873 10:26:46 blockdev_nvme -- common/autotest_common.sh@954 -- # kill -0 73151 00:06:11.873 10:26:46 blockdev_nvme -- common/autotest_common.sh@955 -- # uname 00:06:11.873 10:26:46 blockdev_nvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:11.873 10:26:46 blockdev_nvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73151 00:06:11.873 killing process with pid 73151 00:06:11.873 10:26:46 blockdev_nvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:11.873 10:26:46 
blockdev_nvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:11.873 10:26:46 blockdev_nvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73151' 00:06:11.873 10:26:46 blockdev_nvme -- common/autotest_common.sh@969 -- # kill 73151 00:06:11.873 10:26:46 blockdev_nvme -- common/autotest_common.sh@974 -- # wait 73151 00:06:12.133 10:26:46 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:12.133 10:26:46 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:12.133 10:26:46 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:06:12.133 10:26:46 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:12.133 10:26:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:12.133 ************************************ 00:06:12.133 START TEST bdev_hello_world 00:06:12.133 ************************************ 00:06:12.133 10:26:46 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:12.391 [2024-09-28 10:26:46.954276] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:06:12.391 [2024-09-28 10:26:46.954399] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73218 ] 00:06:12.391 [2024-09-28 10:26:47.082700] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:12.391 [2024-09-28 10:26:47.104919] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.391 [2024-09-28 10:26:47.138144] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.962 [2024-09-28 10:26:47.509067] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:12.962 [2024-09-28 10:26:47.509113] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:12.962 [2024-09-28 10:26:47.509135] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:12.963 [2024-09-28 10:26:47.511167] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:12.963 [2024-09-28 10:26:47.511523] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:12.963 [2024-09-28 10:26:47.511552] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:12.963 [2024-09-28 10:26:47.511674] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
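For reference, the hello-world pass above can be reproduced outside the harness with the same binary and generated config; a minimal sketch using the paths from the trace:

# Run the example against the first NVMe bdev described by bdev.json.
/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1
# Expected notices, as above: application started, bdev Nvme0n1 opened,
# "Hello World!" written, read back, then the app stops.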
00:06:12.963 00:06:12.963 [2024-09-28 10:26:47.511695] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:12.963 ************************************ 00:06:12.963 END TEST bdev_hello_world 00:06:12.963 ************************************ 00:06:12.963 00:06:12.963 real 0m0.771s 00:06:12.963 user 0m0.513s 00:06:12.963 sys 0m0.154s 00:06:12.963 10:26:47 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:12.963 10:26:47 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:12.963 10:26:47 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:12.963 10:26:47 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:12.963 10:26:47 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:12.963 10:26:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:12.963 ************************************ 00:06:12.963 START TEST bdev_bounds 00:06:12.963 ************************************ 00:06:12.963 10:26:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:06:12.963 10:26:47 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=73244 00:06:12.963 10:26:47 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:12.963 10:26:47 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 73244' 00:06:12.963 Process bdevio pid: 73244 00:06:12.963 10:26:47 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 73244 00:06:12.963 10:26:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 73244 ']' 00:06:12.963 10:26:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.963 10:26:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:12.963 10:26:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.963 10:26:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:12.963 10:26:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:12.963 10:26:47 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:13.223 [2024-09-28 10:26:47.773543] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:06:13.223 [2024-09-28 10:26:47.773791] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73244 ] 00:06:13.223 [2024-09-28 10:26:47.902727] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
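The bounds test that starts here drives the bdevio app plus its RPC-based runner. Invoked by hand with the flags from the trace (an illustrative sketch; -s 0 mirrors PRE_RESERVED_MEM=0 above), it would look roughly like:

# Start bdevio in wait mode so the CUnit suites are triggered over RPC.
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
# Once it is listening, run all suites against the attached bdevs.
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests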
00:06:13.223 [2024-09-28 10:26:47.922269] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:13.223 [2024-09-28 10:26:47.956438] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.223 [2024-09-28 10:26:47.956716] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:13.223 [2024-09-28 10:26:47.956730] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.220 10:26:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:14.220 10:26:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:06:14.220 10:26:48 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:14.220 I/O targets: 00:06:14.220 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:14.220 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:14.220 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:14.220 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:14.220 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:14.220 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:14.220 00:06:14.220 00:06:14.220 CUnit - A unit testing framework for C - Version 2.1-3 00:06:14.220 http://cunit.sourceforge.net/ 00:06:14.220 00:06:14.220 00:06:14.220 Suite: bdevio tests on: Nvme3n1 00:06:14.220 Test: blockdev write read block ...passed 00:06:14.220 Test: blockdev write zeroes read block ...passed 00:06:14.220 Test: blockdev write zeroes read no split ...passed 00:06:14.220 Test: blockdev write zeroes read split ...passed 00:06:14.220 Test: blockdev write zeroes read split partial ...passed 00:06:14.220 Test: blockdev reset ...[2024-09-28 10:26:48.715681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:06:14.220 [2024-09-28 10:26:48.717379] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:14.220 passed 00:06:14.220 Test: blockdev write read 8 blocks ...passed 00:06:14.220 Test: blockdev write read size > 128k ...passed 00:06:14.220 Test: blockdev write read invalid size ...passed 00:06:14.220 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:14.220 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:14.220 Test: blockdev write read max offset ...passed 00:06:14.220 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:14.220 Test: blockdev writev readv 8 blocks ...passed 00:06:14.220 Test: blockdev writev readv 30 x 1block ...passed 00:06:14.220 Test: blockdev writev readv block ...passed 00:06:14.220 Test: blockdev writev readv size > 128k ...passed 00:06:14.220 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:14.220 Test: blockdev comparev and writev ...[2024-09-28 10:26:48.727848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 passed 00:06:14.220 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2be006000 len:0x1000 00:06:14.220 [2024-09-28 10:26:48.727998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:14.220 passed 00:06:14.220 Test: blockdev nvme passthru vendor specific ...[2024-09-28 10:26:48.729497] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:14.220 [2024-09-28 10:26:48.729540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:14.220 passed 00:06:14.220 Test: blockdev nvme admin passthru ...passed 00:06:14.220 Test: blockdev copy ...passed 00:06:14.220 Suite: bdevio tests on: Nvme2n3 00:06:14.220 Test: blockdev write read block ...passed 00:06:14.220 Test: blockdev write zeroes read block ...passed 00:06:14.220 Test: blockdev write zeroes read no split ...passed 00:06:14.220 Test: blockdev write zeroes read split ...passed 00:06:14.220 Test: blockdev write zeroes read split partial ...passed 00:06:14.220 Test: blockdev reset ...[2024-09-28 10:26:48.750350] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:14.220 passed 00:06:14.220 Test: blockdev write read 8 blocks ...[2024-09-28 10:26:48.752039] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:14.220 passed 00:06:14.220 Test: blockdev write read size > 128k ...passed 00:06:14.220 Test: blockdev write read invalid size ...passed 00:06:14.220 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:14.220 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:14.220 Test: blockdev write read max offset ...passed 00:06:14.220 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:14.220 Test: blockdev writev readv 8 blocks ...passed 00:06:14.220 Test: blockdev writev readv 30 x 1block ...passed 00:06:14.220 Test: blockdev writev readv block ...passed 00:06:14.220 Test: blockdev writev readv size > 128k ...passed 00:06:14.220 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:14.220 Test: blockdev comparev and writev ...[2024-09-28 10:26:48.761239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 passed 00:06:14.220 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2d0c05000 len:0x1000 00:06:14.220 [2024-09-28 10:26:48.761367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:14.220 passed 00:06:14.220 Test: blockdev nvme passthru vendor specific ...[2024-09-28 10:26:48.762871] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:14.220 [2024-09-28 10:26:48.762899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:14.220 passed 00:06:14.220 Test: blockdev nvme admin passthru ...passed 00:06:14.220 Test: blockdev copy ...passed 00:06:14.220 Suite: bdevio tests on: Nvme2n2 00:06:14.220 Test: blockdev write read block ...passed 00:06:14.220 Test: blockdev write zeroes read block ...passed 00:06:14.220 Test: blockdev write zeroes read no split ...passed 00:06:14.220 Test: blockdev write zeroes read split ...passed 00:06:14.220 Test: blockdev write zeroes read split partial ...passed 00:06:14.220 Test: blockdev reset ...[2024-09-28 10:26:48.786583] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:14.220 passed 00:06:14.220 Test: blockdev write read 8 blocks ...[2024-09-28 10:26:48.788305] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:14.220 passed 00:06:14.220 Test: blockdev write read size > 128k ...passed 00:06:14.220 Test: blockdev write read invalid size ...passed 00:06:14.220 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:14.220 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:14.220 Test: blockdev write read max offset ...passed 00:06:14.220 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:14.220 Test: blockdev writev readv 8 blocks ...passed 00:06:14.220 Test: blockdev writev readv 30 x 1block ...passed 00:06:14.220 Test: blockdev writev readv block ...passed 00:06:14.220 Test: blockdev writev readv size > 128k ...passed 00:06:14.220 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:14.220 Test: blockdev comparev and writev ...[2024-09-28 10:26:48.794260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d1036000 len:0x1000 00:06:14.220 [2024-09-28 10:26:48.794299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:14.220 passed 00:06:14.220 Test: blockdev nvme passthru rw ...passed 00:06:14.220 Test: blockdev nvme passthru vendor specific ...[2024-09-28 10:26:48.794912] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:14.220 passed 00:06:14.220 Test: blockdev nvme admin passthru ...[2024-09-28 10:26:48.794936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:14.220 passed 00:06:14.220 Test: blockdev copy ...passed 00:06:14.220 Suite: bdevio tests on: Nvme2n1 00:06:14.221 Test: blockdev write read block ...passed 00:06:14.221 Test: blockdev write zeroes read block ...passed 00:06:14.221 Test: blockdev write zeroes read no split ...passed 00:06:14.221 Test: blockdev write zeroes read split ...passed 00:06:14.221 Test: blockdev write zeroes read split partial ...passed 00:06:14.221 Test: blockdev reset ...[2024-09-28 10:26:48.807951] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:14.221 passed 00:06:14.221 Test: blockdev write read 8 blocks ...[2024-09-28 10:26:48.809705] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:14.221 passed 00:06:14.221 Test: blockdev write read size > 128k ...passed 00:06:14.221 Test: blockdev write read invalid size ...passed 00:06:14.221 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:14.221 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:14.221 Test: blockdev write read max offset ...passed 00:06:14.221 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:14.221 Test: blockdev writev readv 8 blocks ...passed 00:06:14.221 Test: blockdev writev readv 30 x 1block ...passed 00:06:14.221 Test: blockdev writev readv block ...passed 00:06:14.221 Test: blockdev writev readv size > 128k ...passed 00:06:14.221 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:14.221 Test: blockdev comparev and writev ...[2024-09-28 10:26:48.814215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d1030000 len:0x1000 00:06:14.221 [2024-09-28 10:26:48.814251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:14.221 passed 00:06:14.221 Test: blockdev nvme passthru rw ...passed 00:06:14.221 Test: blockdev nvme passthru vendor specific ...passed 00:06:14.221 Test: blockdev nvme admin passthru ...[2024-09-28 10:26:48.814824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:14.221 [2024-09-28 10:26:48.814849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:14.221 passed 00:06:14.221 Test: blockdev copy ...passed 00:06:14.221 Suite: bdevio tests on: Nvme1n1 00:06:14.221 Test: blockdev write read block ...passed 00:06:14.221 Test: blockdev write zeroes read block ...passed 00:06:14.221 Test: blockdev write zeroes read no split ...passed 00:06:14.221 Test: blockdev write zeroes read split ...passed 00:06:14.221 Test: blockdev write zeroes read split partial ...passed 00:06:14.221 Test: blockdev reset ...[2024-09-28 10:26:48.830079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:06:14.221 passed 00:06:14.221 Test: blockdev write read 8 blocks ...[2024-09-28 10:26:48.832053] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:14.221 passed 00:06:14.221 Test: blockdev write read size > 128k ...passed 00:06:14.221 Test: blockdev write read invalid size ...passed 00:06:14.221 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:14.221 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:14.221 Test: blockdev write read max offset ...passed 00:06:14.221 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:14.221 Test: blockdev writev readv 8 blocks ...passed 00:06:14.221 Test: blockdev writev readv 30 x 1block ...passed 00:06:14.221 Test: blockdev writev readv block ...passed 00:06:14.221 Test: blockdev writev readv size > 128k ...passed 00:06:14.221 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:14.221 Test: blockdev comparev and writev ...[2024-09-28 10:26:48.842282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d102c000 len:0x1000 00:06:14.221 [2024-09-28 10:26:48.842316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:14.221 passed 00:06:14.221 Test: blockdev nvme passthru rw ...passed 00:06:14.221 Test: blockdev nvme passthru vendor specific ...passed 00:06:14.221 Test: blockdev nvme admin passthru ...[2024-09-28 10:26:48.842919] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:14.221 [2024-09-28 10:26:48.842946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:14.221 passed 00:06:14.221 Test: blockdev copy ...passed 00:06:14.221 Suite: bdevio tests on: Nvme0n1 00:06:14.221 Test: blockdev write read block ...passed 00:06:14.221 Test: blockdev write zeroes read block ...passed 00:06:14.221 Test: blockdev write zeroes read no split ...passed 00:06:14.221 Test: blockdev write zeroes read split ...passed 00:06:14.221 Test: blockdev write zeroes read split partial ...passed 00:06:14.221 Test: blockdev reset ...[2024-09-28 10:26:48.864219] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:06:14.221 [2024-09-28 10:26:48.866531] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:06:14.221 passed 00:06:14.221 Test: blockdev write read 8 blocks ...passed 00:06:14.221 Test: blockdev write read size > 128k ...passed 00:06:14.221 Test: blockdev write read invalid size ...passed 00:06:14.221 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:14.221 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:14.221 Test: blockdev write read max offset ...passed 00:06:14.221 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:14.221 Test: blockdev writev readv 8 blocks ...passed 00:06:14.221 Test: blockdev writev readv 30 x 1block ...passed 00:06:14.221 Test: blockdev writev readv block ...passed 00:06:14.221 Test: blockdev writev readv size > 128k ...passed 00:06:14.221 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:14.221 Test: blockdev comparev and writev ...passed 00:06:14.221 Test: blockdev nvme passthru rw ...[2024-09-28 10:26:48.879112] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:14.221 separate metadata which is not supported yet. 
00:06:14.221 passed 00:06:14.221 Test: blockdev nvme passthru vendor specific ...passed 00:06:14.221 Test: blockdev nvme admin passthru ...[2024-09-28 10:26:48.880976] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:14.221 [2024-09-28 10:26:48.881011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:14.221 passed 00:06:14.221 Test: blockdev copy ...passed 00:06:14.221 00:06:14.221 Run Summary: Type Total Ran Passed Failed Inactive 00:06:14.221 suites 6 6 n/a 0 0 00:06:14.221 tests 138 138 138 0 0 00:06:14.221 asserts 893 893 893 0 n/a 00:06:14.221 00:06:14.221 Elapsed time = 0.424 seconds 00:06:14.221 0 00:06:14.221 10:26:48 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 73244 00:06:14.221 10:26:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 73244 ']' 00:06:14.221 10:26:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 73244 00:06:14.221 10:26:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:06:14.221 10:26:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:14.221 10:26:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73244 00:06:14.221 10:26:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:14.221 killing process with pid 73244 00:06:14.221 10:26:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:14.221 10:26:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73244' 00:06:14.221 10:26:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 73244 00:06:14.221 10:26:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 73244 00:06:14.495 ************************************ 00:06:14.495 END TEST bdev_bounds 00:06:14.495 ************************************ 00:06:14.495 10:26:49 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:14.495 00:06:14.495 real 0m1.370s 00:06:14.495 user 0m3.432s 00:06:14.495 sys 0m0.270s 00:06:14.495 10:26:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:14.495 10:26:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:14.495 10:26:49 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:14.495 10:26:49 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:14.495 10:26:49 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:14.495 10:26:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:14.495 ************************************ 00:06:14.495 START TEST bdev_nbd 00:06:14.495 ************************************ 00:06:14.495 10:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:14.495 10:26:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:14.495 10:26:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:14.495 10:26:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.495 10:26:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:14.495 10:26:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:14.495 10:26:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:14.495 10:26:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:14.495 10:26:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:14.495 10:26:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:14.495 10:26:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:14.495 10:26:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:14.495 10:26:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:14.495 10:26:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:14.495 10:26:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:14.495 10:26:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:14.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:14.495 10:26:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=73298 00:06:14.495 10:26:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:14.495 10:26:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 73298 /var/tmp/spdk-nbd.sock 00:06:14.495 10:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 73298 ']' 00:06:14.495 10:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:14.495 10:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:14.495 10:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:14.495 10:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:14.495 10:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:14.495 10:26:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:14.495 [2024-09-28 10:26:49.195654] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:06:14.495 [2024-09-28 10:26:49.195881] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:14.754 [2024-09-28 10:26:49.324878] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
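The harness above launches a standalone bdev_svc app on a private RPC socket and only proceeds once that socket is answering (the "Waiting for process to start up and listen on UNIX domain socket ..." line is printed by the waitforlisten call at bdev/blockdev.sh@319). A condensed sketch of that setup, assuming the same repository layout and socket path shown in the log; polling bdev_get_bdevs is a simplified stand-in for the real waitforlisten helper:

    SOCK=/var/tmp/spdk-nbd.sock
    CONF=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$SOCK" -i 0 --json "$CONF" &
    svc_pid=$!
    # Wait until the RPC server answers before issuing any nbd_* calls.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$SOCK" bdev_get_bdevs >/dev/null 2>&1; do
        sleep 0.2
    done

Once the socket is live, every subsequent rpc.py call in this test is pointed at it with -s "$SOCK", so the nbd_start_disk/nbd_stop_disk traffic below stays isolated to this bdev_svc instance.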
00:06:14.754 [2024-09-28 10:26:49.345727] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.754 [2024-09-28 10:26:49.377861] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.326 10:26:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:15.326 10:26:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:06:15.326 10:26:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:15.326 10:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.326 10:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:15.326 10:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:15.326 10:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:15.326 10:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.326 10:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:15.326 10:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:15.326 10:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:15.326 10:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:15.326 10:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:15.326 10:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:15.326 10:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:15.586 10:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:15.586 10:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:15.586 10:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:15.586 10:26:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:15.586 10:26:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:15.586 10:26:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:15.586 10:26:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:15.586 10:26:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:15.586 10:26:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:15.586 10:26:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:15.586 10:26:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:15.586 10:26:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:15.586 1+0 records in 00:06:15.586 1+0 records out 00:06:15.586 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000335738 s, 12.2 MB/s 00:06:15.586 10:26:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.586 10:26:50 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@886 -- # size=4096 00:06:15.586 10:26:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.586 10:26:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:15.586 10:26:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:15.586 10:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:15.586 10:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:15.586 10:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:15.846 10:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:15.846 10:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:15.846 10:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:15.846 10:26:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:15.846 10:26:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:15.846 10:26:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:15.846 10:26:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:15.846 10:26:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:15.846 10:26:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:15.846 10:26:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:15.846 10:26:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:15.846 10:26:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:15.846 1+0 records in 00:06:15.846 1+0 records out 00:06:15.846 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000463237 s, 8.8 MB/s 00:06:15.846 10:26:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.846 10:26:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:15.846 10:26:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.846 10:26:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:15.846 10:26:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:15.846 10:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:15.846 10:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:15.846 10:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:16.107 10:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:16.107 10:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:16.107 10:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:16.107 10:26:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:06:16.107 10:26:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:16.107 10:26:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( 
i = 1 )) 00:06:16.107 10:26:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:16.107 10:26:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:06:16.107 10:26:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:16.107 10:26:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:16.107 10:26:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:16.108 10:26:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:16.108 1+0 records in 00:06:16.108 1+0 records out 00:06:16.108 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000294482 s, 13.9 MB/s 00:06:16.108 10:26:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.108 10:26:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:16.108 10:26:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.108 10:26:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:16.108 10:26:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:16.108 10:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:16.108 10:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:16.108 10:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:16.369 10:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:16.369 10:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:16.369 10:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:16.369 10:26:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:06:16.369 10:26:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:16.369 10:26:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:16.369 10:26:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:16.369 10:26:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:06:16.369 10:26:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:16.369 10:26:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:16.369 10:26:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:16.369 10:26:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:16.369 1+0 records in 00:06:16.369 1+0 records out 00:06:16.369 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000473989 s, 8.6 MB/s 00:06:16.369 10:26:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.369 10:26:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:16.369 10:26:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.369 10:26:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 
'!=' 0 ']' 00:06:16.369 10:26:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:16.369 10:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:16.369 10:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:16.369 10:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:16.632 10:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:16.632 10:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:16.632 10:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:16.632 10:26:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:06:16.632 10:26:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:16.632 10:26:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:16.632 10:26:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:16.632 10:26:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:06:16.632 10:26:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:16.632 10:26:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:16.632 10:26:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:16.632 10:26:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:16.632 1+0 records in 00:06:16.632 1+0 records out 00:06:16.632 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000344933 s, 11.9 MB/s 00:06:16.632 10:26:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.632 10:26:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:16.632 10:26:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.632 10:26:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:16.632 10:26:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:16.632 10:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:16.632 10:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:16.632 10:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:16.893 10:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:16.893 10:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:16.893 10:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:16.893 10:26:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:06:16.893 10:26:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:16.893 10:26:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:16.893 10:26:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:16.893 10:26:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:06:16.893 10:26:51 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@873 -- # break 00:06:16.893 10:26:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:16.893 10:26:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:16.893 10:26:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:16.893 1+0 records in 00:06:16.893 1+0 records out 00:06:16.893 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000870236 s, 4.7 MB/s 00:06:16.893 10:26:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.893 10:26:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:16.894 10:26:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.894 10:26:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:16.894 10:26:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:16.894 10:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:16.894 10:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:16.894 10:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:16.894 10:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:16.894 { 00:06:16.894 "nbd_device": "/dev/nbd0", 00:06:16.894 "bdev_name": "Nvme0n1" 00:06:16.894 }, 00:06:16.894 { 00:06:16.894 "nbd_device": "/dev/nbd1", 00:06:16.894 "bdev_name": "Nvme1n1" 00:06:16.894 }, 00:06:16.894 { 00:06:16.894 "nbd_device": "/dev/nbd2", 00:06:16.894 "bdev_name": "Nvme2n1" 00:06:16.894 }, 00:06:16.894 { 00:06:16.894 "nbd_device": "/dev/nbd3", 00:06:16.894 "bdev_name": "Nvme2n2" 00:06:16.894 }, 00:06:16.894 { 00:06:16.894 "nbd_device": "/dev/nbd4", 00:06:16.894 "bdev_name": "Nvme2n3" 00:06:16.894 }, 00:06:16.894 { 00:06:16.894 "nbd_device": "/dev/nbd5", 00:06:16.894 "bdev_name": "Nvme3n1" 00:06:16.894 } 00:06:16.894 ]' 00:06:16.894 10:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:16.894 10:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:16.894 10:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:16.894 { 00:06:16.894 "nbd_device": "/dev/nbd0", 00:06:16.894 "bdev_name": "Nvme0n1" 00:06:16.894 }, 00:06:16.894 { 00:06:16.894 "nbd_device": "/dev/nbd1", 00:06:16.894 "bdev_name": "Nvme1n1" 00:06:16.894 }, 00:06:16.894 { 00:06:16.894 "nbd_device": "/dev/nbd2", 00:06:16.894 "bdev_name": "Nvme2n1" 00:06:16.894 }, 00:06:16.894 { 00:06:16.894 "nbd_device": "/dev/nbd3", 00:06:16.894 "bdev_name": "Nvme2n2" 00:06:16.894 }, 00:06:16.894 { 00:06:16.894 "nbd_device": "/dev/nbd4", 00:06:16.894 "bdev_name": "Nvme2n3" 00:06:16.894 }, 00:06:16.894 { 00:06:16.894 "nbd_device": "/dev/nbd5", 00:06:16.894 "bdev_name": "Nvme3n1" 00:06:16.894 } 00:06:16.894 ]' 00:06:17.154 10:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:17.154 10:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.154 10:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:17.154 10:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:17.154 10:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:17.154 10:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.154 10:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:17.154 10:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:17.154 10:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:17.154 10:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:17.154 10:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.155 10:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.155 10:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:17.155 10:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:17.155 10:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.155 10:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.155 10:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:17.415 10:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:17.415 10:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:17.415 10:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:17.415 10:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.415 10:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.415 10:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:17.415 10:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:17.415 10:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.415 10:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.415 10:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:17.677 10:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:17.677 10:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:17.677 10:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:17.677 10:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.677 10:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.677 10:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:17.677 10:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:17.677 10:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.677 10:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.677 10:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:17.935 
10:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:17.935 10:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:17.935 10:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:17.935 10:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.935 10:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.935 10:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:17.935 10:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:17.935 10:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.935 10:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.935 10:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:18.193 10:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:18.193 10:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:18.193 10:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:18.193 10:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.193 10:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.193 10:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:18.193 10:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:18.193 10:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.193 10:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.193 10:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:18.193 10:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:18.193 10:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:18.193 10:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:18.193 10:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.193 10:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.193 10:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:18.193 10:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:18.193 10:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.193 10:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:18.193 10:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.193 10:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:18.452 10:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:18.452 10:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:18.452 10:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:18.452 10:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:18.453 10:26:53 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@65 -- # echo '' 00:06:18.453 10:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:18.453 10:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:18.453 10:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:18.453 10:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:18.453 10:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:18.453 10:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:18.453 10:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:18.453 10:26:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:18.453 10:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.453 10:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:18.453 10:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:18.453 10:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:18.453 10:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:18.453 10:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:18.453 10:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.453 10:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:18.453 10:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:18.453 10:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:18.453 10:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:18.453 10:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:18.453 10:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:18.453 10:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:18.453 10:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:18.713 /dev/nbd0 00:06:18.713 10:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:18.713 10:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:18.713 10:26:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:18.713 10:26:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:18.713 10:26:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:18.713 10:26:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:18.713 10:26:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:18.713 10:26:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:18.713 
10:26:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:18.713 10:26:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:18.713 10:26:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:18.713 1+0 records in 00:06:18.713 1+0 records out 00:06:18.713 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00050948 s, 8.0 MB/s 00:06:18.713 10:26:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.713 10:26:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:18.713 10:26:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.713 10:26:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:18.713 10:26:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:18.713 10:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:18.713 10:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:18.713 10:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:18.974 /dev/nbd1 00:06:18.974 10:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:18.975 10:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:18.975 10:26:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:18.975 10:26:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:18.975 10:26:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:18.975 10:26:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:18.975 10:26:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:18.975 10:26:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:18.975 10:26:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:18.975 10:26:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:18.975 10:26:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:18.975 1+0 records in 00:06:18.975 1+0 records out 00:06:18.975 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000351528 s, 11.7 MB/s 00:06:18.975 10:26:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.975 10:26:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:18.975 10:26:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.975 10:26:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:18.975 10:26:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:18.975 10:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:18.975 10:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:18.975 10:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:19.236 /dev/nbd10 00:06:19.236 10:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:19.236 10:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:19.236 10:26:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:06:19.236 10:26:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:19.236 10:26:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:19.236 10:26:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:19.236 10:26:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:06:19.236 10:26:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:19.236 10:26:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:19.236 10:26:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:19.236 10:26:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:19.236 1+0 records in 00:06:19.236 1+0 records out 00:06:19.236 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000921149 s, 4.4 MB/s 00:06:19.236 10:26:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.236 10:26:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:19.236 10:26:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.236 10:26:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:19.236 10:26:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:19.236 10:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.236 10:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:19.236 10:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:19.497 /dev/nbd11 00:06:19.497 10:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:19.497 10:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:19.497 10:26:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:06:19.497 10:26:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:19.497 10:26:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:19.497 10:26:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:19.497 10:26:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:06:19.497 10:26:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:19.497 10:26:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:19.497 10:26:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:19.497 10:26:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:19.497 1+0 records in 00:06:19.497 1+0 records 
out 00:06:19.497 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00110266 s, 3.7 MB/s 00:06:19.497 10:26:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.497 10:26:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:19.497 10:26:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.497 10:26:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:19.497 10:26:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:19.497 10:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.497 10:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:19.497 10:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:19.759 /dev/nbd12 00:06:19.759 10:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:19.759 10:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:19.759 10:26:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:06:19.759 10:26:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:19.759 10:26:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:19.759 10:26:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:19.759 10:26:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:06:19.759 10:26:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:19.759 10:26:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:19.759 10:26:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:19.759 10:26:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:19.759 1+0 records in 00:06:19.759 1+0 records out 00:06:19.759 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00110381 s, 3.7 MB/s 00:06:19.759 10:26:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.759 10:26:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:19.759 10:26:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.759 10:26:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:19.759 10:26:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:19.759 10:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.759 10:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:19.759 10:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:19.759 /dev/nbd13 00:06:20.021 10:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:20.021 10:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:20.021 10:26:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:06:20.021 10:26:54 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:20.021 10:26:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:20.021 10:26:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:20.021 10:26:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:06:20.021 10:26:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:20.021 10:26:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:20.021 10:26:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:20.021 10:26:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:20.021 1+0 records in 00:06:20.021 1+0 records out 00:06:20.021 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000966363 s, 4.2 MB/s 00:06:20.021 10:26:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:20.021 10:26:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:20.021 10:26:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:20.021 10:26:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:20.021 10:26:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:20.021 10:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:20.021 10:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:20.021 10:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:20.021 10:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.021 10:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:20.021 10:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:20.021 { 00:06:20.021 "nbd_device": "/dev/nbd0", 00:06:20.021 "bdev_name": "Nvme0n1" 00:06:20.021 }, 00:06:20.021 { 00:06:20.021 "nbd_device": "/dev/nbd1", 00:06:20.021 "bdev_name": "Nvme1n1" 00:06:20.021 }, 00:06:20.021 { 00:06:20.021 "nbd_device": "/dev/nbd10", 00:06:20.021 "bdev_name": "Nvme2n1" 00:06:20.021 }, 00:06:20.021 { 00:06:20.021 "nbd_device": "/dev/nbd11", 00:06:20.021 "bdev_name": "Nvme2n2" 00:06:20.021 }, 00:06:20.021 { 00:06:20.021 "nbd_device": "/dev/nbd12", 00:06:20.021 "bdev_name": "Nvme2n3" 00:06:20.021 }, 00:06:20.021 { 00:06:20.021 "nbd_device": "/dev/nbd13", 00:06:20.021 "bdev_name": "Nvme3n1" 00:06:20.021 } 00:06:20.021 ]' 00:06:20.021 10:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:20.021 { 00:06:20.021 "nbd_device": "/dev/nbd0", 00:06:20.021 "bdev_name": "Nvme0n1" 00:06:20.021 }, 00:06:20.021 { 00:06:20.021 "nbd_device": "/dev/nbd1", 00:06:20.021 "bdev_name": "Nvme1n1" 00:06:20.021 }, 00:06:20.021 { 00:06:20.021 "nbd_device": "/dev/nbd10", 00:06:20.021 "bdev_name": "Nvme2n1" 00:06:20.021 }, 00:06:20.021 { 00:06:20.021 "nbd_device": "/dev/nbd11", 00:06:20.021 "bdev_name": "Nvme2n2" 00:06:20.021 }, 00:06:20.021 { 00:06:20.021 "nbd_device": "/dev/nbd12", 00:06:20.021 "bdev_name": "Nvme2n3" 00:06:20.021 }, 00:06:20.021 { 00:06:20.021 "nbd_device": "/dev/nbd13", 00:06:20.021 "bdev_name": 
"Nvme3n1" 00:06:20.021 } 00:06:20.021 ]' 00:06:20.021 10:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:20.280 10:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:20.280 /dev/nbd1 00:06:20.280 /dev/nbd10 00:06:20.280 /dev/nbd11 00:06:20.280 /dev/nbd12 00:06:20.280 /dev/nbd13' 00:06:20.280 10:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:20.280 /dev/nbd1 00:06:20.280 /dev/nbd10 00:06:20.280 /dev/nbd11 00:06:20.280 /dev/nbd12 00:06:20.280 /dev/nbd13' 00:06:20.280 10:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:20.280 10:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:20.280 10:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:20.280 10:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:20.280 10:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:20.280 10:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:20.280 10:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:20.280 10:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:20.280 10:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:20.280 10:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:20.280 10:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:20.280 10:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:20.280 256+0 records in 00:06:20.280 256+0 records out 00:06:20.280 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00709719 s, 148 MB/s 00:06:20.280 10:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.280 10:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:20.280 256+0 records in 00:06:20.280 256+0 records out 00:06:20.280 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0868098 s, 12.1 MB/s 00:06:20.280 10:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.280 10:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:20.280 256+0 records in 00:06:20.280 256+0 records out 00:06:20.280 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0872304 s, 12.0 MB/s 00:06:20.280 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.280 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:20.538 256+0 records in 00:06:20.538 256+0 records out 00:06:20.538 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0645794 s, 16.2 MB/s 00:06:20.538 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.538 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 
bs=4096 count=256 oflag=direct 00:06:20.538 256+0 records in 00:06:20.538 256+0 records out 00:06:20.538 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0655477 s, 16.0 MB/s 00:06:20.538 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.538 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:20.538 256+0 records in 00:06:20.538 256+0 records out 00:06:20.538 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0764947 s, 13.7 MB/s 00:06:20.538 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.538 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:20.538 256+0 records in 00:06:20.538 256+0 records out 00:06:20.538 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0657831 s, 15.9 MB/s 00:06:20.538 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:20.538 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:20.538 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:20.538 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:20.538 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:20.538 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:20.538 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:20.538 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.538 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:20.538 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.538 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:20.538 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.538 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:20.797 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.797 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:20.797 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.797 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:20.797 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.797 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:20.797 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:20.797 10:26:55 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:20.797 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.797 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:20.797 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:20.797 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:20.797 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.797 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:20.797 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:20.797 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:20.797 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:20.797 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.797 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.797 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:20.797 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:20.797 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.797 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.797 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:21.056 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:21.056 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:21.056 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:21.056 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.056 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.056 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:21.056 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:21.056 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.056 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.056 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:21.314 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:21.314 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:21.314 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:21.314 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.314 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.314 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:21.314 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:21.314 10:26:55 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.314 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.314 10:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:21.572 10:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:21.572 10:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:21.572 10:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:21.572 10:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.572 10:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.572 10:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:21.572 10:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:21.572 10:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.572 10:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.572 10:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:21.867 10:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:21.867 10:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:21.867 10:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:21.867 10:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.867 10:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.867 10:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:21.867 10:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:21.867 10:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.867 10:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.867 10:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:21.867 10:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:21.868 10:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:21.868 10:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:21.868 10:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.868 10:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.868 10:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:21.868 10:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:21.868 10:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.868 10:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:21.868 10:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.868 10:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:22.132 10:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[]' 00:06:22.132 10:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:22.132 10:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:22.132 10:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:22.132 10:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:22.132 10:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:22.132 10:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:22.132 10:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:22.132 10:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:22.132 10:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:22.132 10:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:22.132 10:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:22.132 10:26:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:22.132 10:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.132 10:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:22.132 10:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:22.393 malloc_lvol_verify 00:06:22.393 10:26:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:22.677 4e34b7ab-708d-4f72-a2ef-35d4726740a4 00:06:22.677 10:26:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:22.677 214d8868-2ea9-4588-abdc-d617a4ed4412 00:06:22.677 10:26:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:22.938 /dev/nbd0 00:06:22.938 10:26:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:22.938 10:26:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:22.938 10:26:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:22.938 10:26:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:22.938 10:26:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:22.938 mke2fs 1.47.0 (5-Feb-2023) 00:06:22.938 Discarding device blocks: 0/4096 done 00:06:22.938 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:22.938 00:06:22.938 Allocating group tables: 0/1 done 00:06:22.938 Writing inode tables: 0/1 done 00:06:22.938 Creating journal (1024 blocks): done 00:06:22.938 Writing superblocks and filesystem accounting information: 0/1 done 00:06:22.938 00:06:22.938 10:26:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:22.938 10:26:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.938 10:26:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:22.938 10:26:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 
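The nbd_with_lvol_verify step traced above (and finished in the lines that follow) is a complete lvol-over-NBD round trip driven through rpc.py. A condensed sketch of that flow, using the RPC calls and sizes visible in the trace; the capacity wait is simplified to a plain poll, the poll interval is assumed, and error handling is omitted:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock

# create a small malloc bdev, build an lvstore on it, carve out one lvol
$rpc -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512
$rpc -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
$rpc -s "$sock" bdev_lvol_create lvol 4 -l lvs

# export the lvol over NBD and wait for the kernel to report a non-zero size
$rpc -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0
while (( $(cat /sys/block/nbd0/size) == 0 )); do sleep 0.1; done   # interval assumed

# prove the block device works end to end, then tear it down
mkfs.ext4 /dev/nbd0
$rpc -s "$sock" nbd_stop_disk /dev/nbd0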
00:06:22.938 10:26:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:22.938 10:26:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:22.938 10:26:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:23.200 10:26:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:23.200 10:26:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:23.200 10:26:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:23.200 10:26:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:23.200 10:26:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:23.200 10:26:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:23.200 10:26:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:23.200 10:26:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:23.200 10:26:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 73298 00:06:23.200 10:26:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 73298 ']' 00:06:23.200 10:26:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 73298 00:06:23.200 10:26:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:06:23.200 10:26:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:23.200 10:26:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73298 00:06:23.200 10:26:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:23.200 killing process with pid 73298 00:06:23.200 10:26:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:23.200 10:26:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73298' 00:06:23.200 10:26:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 73298 00:06:23.200 10:26:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 73298 00:06:23.461 10:26:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:23.461 00:06:23.461 real 0m8.940s 00:06:23.461 user 0m13.064s 00:06:23.461 sys 0m2.990s 00:06:23.462 ************************************ 00:06:23.462 END TEST bdev_nbd 00:06:23.462 ************************************ 00:06:23.462 10:26:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:23.462 10:26:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:23.462 10:26:58 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:06:23.462 10:26:58 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:06:23.462 skipping fio tests on NVMe due to multi-ns failures. 00:06:23.462 10:26:58 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
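Every nbd_stop_disk above is followed by the same waitfornbd_exit poll, which just waits for the device name to disappear from /proc/partitions. A reconstruction of that helper from the traced lines; the iteration cap and grep match the trace, while the sleep interval and exact function body are assumptions:

# Wait for an nbd device to vanish from /proc/partitions, as nbd_common.sh does
# after nbd_stop_disk; gives up after 20 attempts like the traced loop.
waitfornbd_exit() {
    local nbd_name=$1 i
    for (( i = 1; i <= 20; i++ )); do
        grep -q -w "$nbd_name" /proc/partitions || return 0
        sleep 0.1   # interval assumed; the trace does not show it
    done
    return 1
}

waitfornbd_exit nbd0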
00:06:23.462 10:26:58 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:23.462 10:26:58 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:23.462 10:26:58 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:06:23.462 10:26:58 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.462 10:26:58 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:23.462 ************************************ 00:06:23.462 START TEST bdev_verify 00:06:23.462 ************************************ 00:06:23.462 10:26:58 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:23.462 [2024-09-28 10:26:58.191192] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:06:23.462 [2024-09-28 10:26:58.191307] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73661 ] 00:06:23.723 [2024-09-28 10:26:58.320705] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:23.723 [2024-09-28 10:26:58.341327] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:23.723 [2024-09-28 10:26:58.374886] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.723 [2024-09-28 10:26:58.374984] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.296 Running I/O for 5 seconds... 
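The bdev_verify stage is a plain bdevperf run against the generated bdev.json; the full command line appears in the trace. Restated with the flags annotated (the reading of -C is inferred from the results table below, where every namespace is listed once per core mask):

# -q 128      queue depth per job
# -o 4096     4 KiB I/O size
# -w verify   write/read-back verification workload
# -t 5        run for 5 seconds
# -C, -m 0x3  drive each bdev from both cores in the 0x3 mask, which is why the
#             table that follows lists each Nvme device twice (core mask 0x1 and 0x2)
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3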
00:06:29.148 19840.00 IOPS, 77.50 MiB/s 19744.00 IOPS, 77.12 MiB/s 19200.00 IOPS, 75.00 MiB/s 19536.00 IOPS, 76.31 MiB/s 19635.20 IOPS, 76.70 MiB/s 00:06:29.148 Latency(us) 00:06:29.148 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:29.148 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:29.148 Verification LBA range: start 0x0 length 0xbd0bd 00:06:29.148 Nvme0n1 : 5.05 1623.48 6.34 0.00 0.00 78561.91 14518.74 89935.56 00:06:29.148 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:29.148 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:29.148 Nvme0n1 : 5.05 1596.46 6.24 0.00 0.00 79960.31 14720.39 93565.24 00:06:29.148 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:29.148 Verification LBA range: start 0x0 length 0xa0000 00:06:29.148 Nvme1n1 : 5.05 1623.04 6.34 0.00 0.00 78423.35 17543.48 83886.08 00:06:29.148 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:29.148 Verification LBA range: start 0xa0000 length 0xa0000 00:06:29.148 Nvme1n1 : 5.05 1595.99 6.23 0.00 0.00 79831.55 16131.94 89532.26 00:06:29.148 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:29.148 Verification LBA range: start 0x0 length 0x80000 00:06:29.148 Nvme2n1 : 5.06 1630.11 6.37 0.00 0.00 77885.25 5016.02 78239.90 00:06:29.148 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:29.148 Verification LBA range: start 0x80000 length 0x80000 00:06:29.148 Nvme2n1 : 5.05 1595.56 6.23 0.00 0.00 79702.00 14821.22 83886.08 00:06:29.148 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:29.148 Verification LBA range: start 0x0 length 0x80000 00:06:29.148 Nvme2n2 : 5.08 1638.22 6.40 0.00 0.00 77406.59 11292.36 81062.99 00:06:29.148 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:29.148 Verification LBA range: start 0x80000 length 0x80000 00:06:29.148 Nvme2n2 : 5.06 1595.11 6.23 0.00 0.00 79569.26 14518.74 79449.80 00:06:29.148 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:29.148 Verification LBA range: start 0x0 length 0x80000 00:06:29.148 Nvme2n3 : 5.08 1637.79 6.40 0.00 0.00 77258.84 10939.47 86305.87 00:06:29.148 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:29.148 Verification LBA range: start 0x80000 length 0x80000 00:06:29.148 Nvme2n3 : 5.07 1603.55 6.26 0.00 0.00 78982.74 3125.56 85499.27 00:06:29.148 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:29.148 Verification LBA range: start 0x0 length 0x20000 00:06:29.148 Nvme3n1 : 5.08 1637.37 6.40 0.00 0.00 77126.09 10132.87 91952.05 00:06:29.148 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:29.148 Verification LBA range: start 0x20000 length 0x20000 00:06:29.148 Nvme3n1 : 5.08 1613.53 6.30 0.00 0.00 78402.51 5999.06 91952.05 00:06:29.148 =================================================================================================================== 00:06:29.148 Total : 19390.22 75.74 0.00 0.00 78580.50 3125.56 93565.24 00:06:29.719 00:06:29.719 real 0m6.256s 00:06:29.719 user 0m11.768s 00:06:29.719 sys 0m0.201s 00:06:29.719 ************************************ 00:06:29.719 10:27:04 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:29.719 10:27:04 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:29.719 
END TEST bdev_verify 00:06:29.719 ************************************ 00:06:29.719 10:27:04 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:29.719 10:27:04 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:06:29.719 10:27:04 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:29.719 10:27:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:29.719 ************************************ 00:06:29.719 START TEST bdev_verify_big_io 00:06:29.719 ************************************ 00:06:29.719 10:27:04 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:29.979 [2024-09-28 10:27:04.531909] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:06:29.979 [2024-09-28 10:27:04.532072] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73749 ] 00:06:29.979 [2024-09-28 10:27:04.665080] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:29.979 [2024-09-28 10:27:04.683011] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:29.979 [2024-09-28 10:27:04.737733] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.979 [2024-09-28 10:27:04.737829] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.550 Running I/O for 5 seconds... 
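The MiB/s column in these bdevperf tables follows directly from IOPS and the I/O size. A quick sanity check against the last progress sample of the 4 KiB verify run above (19635.20 IOPS, 76.70 MiB/s); the 64 KiB big_io run that starts here scales the same way:

# MiB/s = IOPS * io_size_bytes / 2^20
awk 'BEGIN { printf "%.2f MiB/s\n", 19635.20 * 4096 / 1048576 }'   # prints 76.70 MiB/s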
00:06:36.704 635.00 IOPS, 39.69 MiB/s 2169.50 IOPS, 135.59 MiB/s 2708.33 IOPS, 169.27 MiB/s 00:06:36.704 Latency(us) 00:06:36.704 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:36.704 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:36.704 Verification LBA range: start 0x0 length 0xbd0b 00:06:36.704 Nvme0n1 : 5.68 112.66 7.04 0.00 0.00 1094826.14 31457.28 1142141.24 00:06:36.704 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:36.704 Verification LBA range: start 0xbd0b length 0xbd0b 00:06:36.704 Nvme0n1 : 5.83 124.87 7.80 0.00 0.00 984315.54 24702.03 1129235.69 00:06:36.704 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:36.704 Verification LBA range: start 0x0 length 0xa000 00:06:36.704 Nvme1n1 : 5.69 112.57 7.04 0.00 0.00 1062796.05 154866.61 922746.88 00:06:36.704 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:36.704 Verification LBA range: start 0xa000 length 0xa000 00:06:36.704 Nvme1n1 : 5.83 116.08 7.25 0.00 0.00 1012895.40 52832.10 1458327.24 00:06:36.704 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:36.704 Verification LBA range: start 0x0 length 0x8000 00:06:36.704 Nvme2n1 : 5.79 113.70 7.11 0.00 0.00 1018144.74 98808.12 942105.21 00:06:36.704 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:36.704 Verification LBA range: start 0x8000 length 0x8000 00:06:36.704 Nvme2n1 : 5.84 122.00 7.62 0.00 0.00 938286.90 66544.25 1464780.01 00:06:36.704 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:36.704 Verification LBA range: start 0x0 length 0x8000 00:06:36.704 Nvme2n2 : 5.83 120.79 7.55 0.00 0.00 939945.93 37708.41 1006632.96 00:06:36.704 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:36.704 Verification LBA range: start 0x8000 length 0x8000 00:06:36.704 Nvme2n2 : 5.90 123.30 7.71 0.00 0.00 899669.95 64931.05 1503496.66 00:06:36.704 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:36.704 Verification LBA range: start 0x0 length 0x8000 00:06:36.704 Nvme2n3 : 5.90 127.23 7.95 0.00 0.00 862913.11 15829.46 1084066.26 00:06:36.704 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:36.704 Verification LBA range: start 0x8000 length 0x8000 00:06:36.704 Nvme2n3 : 5.99 137.32 8.58 0.00 0.00 786887.98 18854.20 1535760.54 00:06:36.704 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:36.704 Verification LBA range: start 0x0 length 0x2000 00:06:36.704 Nvme3n1 : 5.96 146.11 9.13 0.00 0.00 732711.72 1260.31 1187310.67 00:06:36.704 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:36.704 Verification LBA range: start 0x2000 length 0x2000 00:06:36.704 Nvme3n1 : 6.02 164.99 10.31 0.00 0.00 638723.09 1121.67 1445421.69 00:06:36.704 =================================================================================================================== 00:06:36.704 Total : 1521.61 95.10 0.00 0.00 897039.99 1121.67 1535760.54 00:06:39.256 00:06:39.256 real 0m9.282s 00:06:39.256 user 0m17.650s 00:06:39.256 sys 0m0.335s 00:06:39.256 10:27:13 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:39.256 ************************************ 00:06:39.256 END TEST bdev_verify_big_io 00:06:39.256 ************************************ 00:06:39.256 10:27:13 
blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:06:39.256 10:27:13 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:39.256 10:27:13 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:06:39.256 10:27:13 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:39.256 10:27:13 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:39.256 ************************************ 00:06:39.256 START TEST bdev_write_zeroes 00:06:39.256 ************************************ 00:06:39.256 10:27:13 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:39.256 [2024-09-28 10:27:13.843048] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:06:39.256 [2024-09-28 10:27:13.843164] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73854 ] 00:06:39.256 [2024-09-28 10:27:13.972610] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:39.256 [2024-09-28 10:27:13.991893] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.256 [2024-09-28 10:27:14.026614] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.824 Running I/O for 1 seconds... 
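The write_zeroes stage reuses the same bdevperf harness with a different workload: it issues write-zeroes commands for one second on a single core (the EAL parameters above show -c 0x1, and only reactor 0 starts). The traced invocation, condensed:

# Same bdev.json, but a 1-second write_zeroes workload with no -C/-m core spread.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w write_zeroes -t 1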
00:06:40.762 59904.00 IOPS, 234.00 MiB/s 00:06:40.762 Latency(us) 00:06:40.762 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:40.762 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:40.762 Nvme0n1 : 1.03 9916.62 38.74 0.00 0.00 12852.82 4889.99 30045.74 00:06:40.762 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:40.762 Nvme1n1 : 1.03 9889.61 38.63 0.00 0.00 12866.11 9275.86 29440.79 00:06:40.762 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:40.762 Nvme2n1 : 1.03 9863.14 38.53 0.00 0.00 12860.32 9225.45 29037.49 00:06:40.762 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:40.762 Nvme2n2 : 1.03 9836.98 38.43 0.00 0.00 12846.17 9275.86 28029.24 00:06:40.762 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:40.762 Nvme2n3 : 1.04 9815.51 38.34 0.00 0.00 12821.55 8721.33 25306.98 00:06:40.762 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:40.762 Nvme3n1 : 1.04 9804.50 38.30 0.00 0.00 12786.19 9074.22 21878.94 00:06:40.762 =================================================================================================================== 00:06:40.762 Total : 59126.35 230.96 0.00 0.00 12838.86 4889.99 30045.74 00:06:41.024 00:06:41.024 real 0m1.908s 00:06:41.024 user 0m1.615s 00:06:41.024 sys 0m0.183s 00:06:41.024 10:27:15 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:41.024 ************************************ 00:06:41.024 END TEST bdev_write_zeroes 00:06:41.024 ************************************ 00:06:41.024 10:27:15 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:06:41.024 10:27:15 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:41.024 10:27:15 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:06:41.024 10:27:15 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.024 10:27:15 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:41.024 ************************************ 00:06:41.024 START TEST bdev_json_nonenclosed 00:06:41.024 ************************************ 00:06:41.024 10:27:15 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:41.286 [2024-09-28 10:27:15.823830] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:06:41.286 [2024-09-28 10:27:15.823981] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73896 ] 00:06:41.286 [2024-09-28 10:27:15.956074] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
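bdev_json_nonenclosed is a negative test: it points bdevperf at test/bdev/nonenclosed.json and expects the "not enclosed in {}" error reported in the lines that follow. The log does not show that file's contents; a hypothetical minimal config that would trip the same check looks like this (the /tmp path and the JSON body are illustrative only):

# Valid JSON, but the top level is an array instead of the required {...} object,
# so json_config_prepare_ctx rejects it as "not enclosed in {}".
cat > /tmp/nonenclosed-example.json <<'EOF'
[
  { "subsystem": "bdev", "config": [] }
]
EOF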
00:06:41.286 [2024-09-28 10:27:15.974798] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.286 [2024-09-28 10:27:16.027248] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.286 [2024-09-28 10:27:16.027371] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:41.286 [2024-09-28 10:27:16.027395] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:41.286 [2024-09-28 10:27:16.027406] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:41.548 00:06:41.548 real 0m0.380s 00:06:41.548 user 0m0.162s 00:06:41.548 sys 0m0.114s 00:06:41.548 10:27:16 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:41.548 ************************************ 00:06:41.548 END TEST bdev_json_nonenclosed 00:06:41.548 ************************************ 00:06:41.548 10:27:16 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:41.548 10:27:16 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:41.548 10:27:16 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:06:41.548 10:27:16 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.548 10:27:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:41.548 ************************************ 00:06:41.548 START TEST bdev_json_nonarray 00:06:41.548 ************************************ 00:06:41.548 10:27:16 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:41.548 [2024-09-28 10:27:16.273060] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:06:41.548 [2024-09-28 10:27:16.273260] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73916 ] 00:06:41.809 [2024-09-28 10:27:16.406667] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:41.809 [2024-09-28 10:27:16.429714] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.809 [2024-09-28 10:27:16.481557] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.809 [2024-09-28 10:27:16.481679] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
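bdev_json_nonarray is the companion negative test: the top level is an object, but "subsystems" is not an array, which produces the error just above. Again a hypothetical illustration rather than the literal contents of test/bdev/nonarray.json:

# Valid JSON and properly enclosed, but "subsystems" maps to an object instead of
# an array, so json_config complains that 'subsystems' should be an array.
cat > /tmp/nonarray-example.json <<'EOF'
{
  "subsystems": { "subsystem": "bdev", "config": [] }
}
EOF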
00:06:41.809 [2024-09-28 10:27:16.481703] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:41.809 [2024-09-28 10:27:16.481717] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:42.071 00:06:42.071 real 0m0.391s 00:06:42.071 user 0m0.162s 00:06:42.071 sys 0m0.123s 00:06:42.071 10:27:16 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:42.071 10:27:16 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:42.071 ************************************ 00:06:42.071 END TEST bdev_json_nonarray 00:06:42.071 ************************************ 00:06:42.071 10:27:16 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:06:42.071 10:27:16 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:06:42.071 10:27:16 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:06:42.071 10:27:16 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:06:42.071 10:27:16 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:06:42.071 10:27:16 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:42.071 10:27:16 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:42.071 10:27:16 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:06:42.071 10:27:16 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:06:42.071 10:27:16 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:06:42.071 10:27:16 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:06:42.071 00:06:42.071 real 0m31.534s 00:06:42.071 user 0m50.278s 00:06:42.071 sys 0m5.035s 00:06:42.071 10:27:16 blockdev_nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:42.071 ************************************ 00:06:42.071 END TEST blockdev_nvme 00:06:42.071 ************************************ 00:06:42.071 10:27:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:42.071 10:27:16 -- spdk/autotest.sh@209 -- # uname -s 00:06:42.071 10:27:16 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:06:42.071 10:27:16 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:42.071 10:27:16 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:42.071 10:27:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:42.071 10:27:16 -- common/autotest_common.sh@10 -- # set +x 00:06:42.071 ************************************ 00:06:42.071 START TEST blockdev_nvme_gpt 00:06:42.071 ************************************ 00:06:42.071 10:27:16 blockdev_nvme_gpt -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:42.071 * Looking for test storage... 
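The outer autotest driver reuses one script for every bdev flavor: the blockdev_nvme run that just ended and the blockdev_nvme_gpt run starting here are both test/bdev/blockdev.sh, selected by the positional test-type argument and wrapped in run_test for the START/END banners and timing. As traced above:

# From spdk/autotest.sh: the gpt flavor is the same driver script with a different
# argument (the preceding nvme flavor was invoked the same way, with "nvme").
run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt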
00:06:42.071 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:42.071 10:27:16 blockdev_nvme_gpt -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:42.071 10:27:16 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lcov --version 00:06:42.071 10:27:16 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:42.333 10:27:16 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:42.333 10:27:16 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:42.333 10:27:16 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:42.333 10:27:16 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:42.333 10:27:16 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:06:42.333 10:27:16 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:06:42.333 10:27:16 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:06:42.333 10:27:16 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:06:42.333 10:27:16 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:06:42.333 10:27:16 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:06:42.333 10:27:16 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:06:42.333 10:27:16 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:42.333 10:27:16 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:06:42.333 10:27:16 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:06:42.333 10:27:16 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:42.333 10:27:16 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:42.333 10:27:16 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:06:42.333 10:27:16 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:06:42.333 10:27:16 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:42.333 10:27:16 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:06:42.333 10:27:16 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:06:42.333 10:27:16 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:06:42.333 10:27:16 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:06:42.333 10:27:16 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:42.333 10:27:16 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:06:42.333 10:27:16 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:06:42.333 10:27:16 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:42.333 10:27:16 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:42.333 10:27:16 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:06:42.333 10:27:16 blockdev_nvme_gpt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:42.333 10:27:16 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:42.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.333 --rc genhtml_branch_coverage=1 00:06:42.333 --rc genhtml_function_coverage=1 00:06:42.333 --rc genhtml_legend=1 00:06:42.333 --rc geninfo_all_blocks=1 00:06:42.333 --rc geninfo_unexecuted_blocks=1 00:06:42.333 00:06:42.333 ' 00:06:42.333 10:27:16 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:42.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.333 --rc 
genhtml_branch_coverage=1 00:06:42.333 --rc genhtml_function_coverage=1 00:06:42.333 --rc genhtml_legend=1 00:06:42.333 --rc geninfo_all_blocks=1 00:06:42.333 --rc geninfo_unexecuted_blocks=1 00:06:42.333 00:06:42.333 ' 00:06:42.333 10:27:16 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:42.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.333 --rc genhtml_branch_coverage=1 00:06:42.333 --rc genhtml_function_coverage=1 00:06:42.333 --rc genhtml_legend=1 00:06:42.333 --rc geninfo_all_blocks=1 00:06:42.333 --rc geninfo_unexecuted_blocks=1 00:06:42.333 00:06:42.333 ' 00:06:42.333 10:27:16 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:42.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.333 --rc genhtml_branch_coverage=1 00:06:42.333 --rc genhtml_function_coverage=1 00:06:42.333 --rc genhtml_legend=1 00:06:42.333 --rc geninfo_all_blocks=1 00:06:42.333 --rc geninfo_unexecuted_blocks=1 00:06:42.333 00:06:42.333 ' 00:06:42.333 10:27:16 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:42.333 10:27:16 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:06:42.333 10:27:16 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:42.333 10:27:16 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:42.333 10:27:16 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:42.333 10:27:16 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:42.333 10:27:16 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:42.333 10:27:16 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:42.333 10:27:16 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:06:42.333 10:27:16 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:42.333 10:27:16 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:42.333 10:27:16 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:42.333 10:27:16 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:06:42.333 10:27:16 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:42.333 10:27:16 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:42.333 10:27:16 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:06:42.333 10:27:16 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:42.333 10:27:16 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:06:42.333 10:27:16 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:42.333 10:27:16 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:42.333 10:27:16 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:42.333 10:27:16 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:06:42.333 10:27:16 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:06:42.333 10:27:16 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:42.334 10:27:16 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=74000 00:06:42.334 10:27:16 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:42.334 10:27:16 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 74000 
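The long scripts/common.sh walk above is a dotted-version comparison: blockdev.sh checks whether the installed lcov is older than 2.x before exporting the branch/function coverage options. The essence of that comparison as a standalone sketch; the helper name version_lt is made up here, the field splitting is simplified to dots, and this is not the verbatim library code:

# Return 0 (true) when the first dotted version is strictly less than the second,
# comparing field by field as the traced cmp_versions loop does, e.g. 1.15 < 2.
version_lt() {
    local -a ver1 ver2
    IFS=. read -ra ver1 <<< "$1"
    IFS=. read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov < 2: enable the extra LCOV_OPTS above"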
00:06:42.334 10:27:16 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # '[' -z 74000 ']' 00:06:42.334 10:27:16 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.334 10:27:16 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:42.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.334 10:27:16 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.334 10:27:16 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:42.334 10:27:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:42.334 10:27:16 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:42.334 [2024-09-28 10:27:16.966100] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:06:42.334 [2024-09-28 10:27:16.966255] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74000 ] 00:06:42.334 [2024-09-28 10:27:17.098946] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:42.596 [2024-09-28 10:27:17.121108] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.596 [2024-09-28 10:27:17.172845] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.166 10:27:17 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:43.166 10:27:17 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # return 0 00:06:43.166 10:27:17 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:43.166 10:27:17 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:06:43.166 10:27:17 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:43.425 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:43.684 Waiting for block devices as requested 00:06:43.684 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:43.684 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:43.945 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:43.945 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:49.303 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:49.303 10:27:23 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:06:49.303 10:27:23 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:06:49.303 10:27:23 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:06:49.303 10:27:23 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # local nvme bdf 00:06:49.303 10:27:23 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:49.303 10:27:23 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:06:49.303 10:27:23 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:06:49.303 10:27:23 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:49.303 10:27:23 blockdev_nvme_gpt 
-- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:49.303 10:27:23 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:49.303 10:27:23 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:06:49.303 10:27:23 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:06:49.303 10:27:23 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:49.303 10:27:23 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:49.303 10:27:23 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:49.303 10:27:23 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:06:49.303 10:27:23 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:06:49.303 10:27:23 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:49.303 10:27:23 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:49.303 10:27:23 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:49.303 10:27:23 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:06:49.303 10:27:23 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:06:49.303 10:27:23 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:49.303 10:27:23 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:49.303 10:27:23 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:49.303 10:27:23 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:06:49.303 10:27:23 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:06:49.303 10:27:23 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:49.303 10:27:23 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:49.303 10:27:23 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:49.303 10:27:23 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:06:49.303 10:27:23 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:06:49.303 10:27:23 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:49.303 10:27:23 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:49.303 10:27:23 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:49.303 10:27:23 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:06:49.303 10:27:23 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:06:49.303 10:27:23 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:06:49.303 10:27:23 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:49.303 10:27:23 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:06:49.303 10:27:23 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:06:49.303 10:27:23 
blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:06:49.303 10:27:23 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:06:49.303 10:27:23 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:06:49.303 10:27:23 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:06:49.303 10:27:23 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:06:49.303 10:27:23 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:06:49.303 BYT; 00:06:49.303 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:06:49.303 10:27:23 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:06:49.303 BYT; 00:06:49.303 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:06:49.303 10:27:23 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:06:49.303 10:27:23 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:06:49.303 10:27:23 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:06:49.304 10:27:23 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:06:49.304 10:27:23 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:06:49.304 10:27:23 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:06:49.304 10:27:23 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:06:49.304 10:27:23 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:06:49.304 10:27:23 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:49.304 10:27:23 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:49.304 10:27:23 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:06:49.304 10:27:23 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:06:49.304 10:27:23 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:49.304 10:27:23 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:06:49.304 10:27:23 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:49.304 10:27:23 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:49.304 10:27:23 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:49.304 10:27:23 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:06:49.304 10:27:23 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:06:49.304 10:27:23 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:49.304 10:27:23 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:49.304 10:27:23 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:06:49.304 10:27:23 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:06:49.304 10:27:23 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w 
SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:49.304 10:27:23 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:06:49.304 10:27:23 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:49.304 10:27:23 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:49.304 10:27:23 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:49.304 10:27:23 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:06:50.238 The operation has completed successfully. 00:06:50.238 10:27:24 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:06:51.171 The operation has completed successfully. 00:06:51.171 10:27:25 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:51.429 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:51.997 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:51.997 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:51.997 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:51.997 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:51.997 10:27:26 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:06:51.997 10:27:26 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.997 10:27:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:51.997 [] 00:06:51.997 10:27:26 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.997 10:27:26 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:06:51.997 10:27:26 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:06:51.997 10:27:26 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:51.997 10:27:26 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:51.997 10:27:26 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:51.997 10:27:26 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.997 10:27:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:52.258 10:27:27 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.258 10:27:27 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:52.258 10:27:27 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.258 10:27:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:52.258 10:27:27 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.258 10:27:27 blockdev_nvme_gpt -- 
bdev/blockdev.sh@739 -- # cat 00:06:52.258 10:27:27 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:52.258 10:27:27 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.258 10:27:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:52.258 10:27:27 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.258 10:27:27 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:52.258 10:27:27 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.258 10:27:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:52.520 10:27:27 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.520 10:27:27 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:52.520 10:27:27 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.520 10:27:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:52.520 10:27:27 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.520 10:27:27 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:52.520 10:27:27 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:52.520 10:27:27 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:52.520 10:27:27 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.520 10:27:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:52.520 10:27:27 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.520 10:27:27 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:52.520 10:27:27 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:52.521 10:27:27 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "6bbfb8bc-af86-4a70-8dfe-edd514de87da"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "6bbfb8bc-af86-4a70-8dfe-edd514de87da",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' 
'}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "9a080d5d-a277-49e3-bb6a-c27f13597ed0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9a080d5d-a277-49e3-bb6a-c27f13597ed0",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' 
"security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "d8087e5f-5452-4227-a439-cb29d670483d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d8087e5f-5452-4227-a439-cb29d670483d",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "75b027f3-22f8-47fc-b615-766c33b340d2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "75b027f3-22f8-47fc-b615-766c33b340d2",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "3aadaed6-4004-44cc-9d5f-8ee177a21716"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": 
"3aadaed6-4004-44cc-9d5f-8ee177a21716",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:52.521 10:27:27 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:52.521 10:27:27 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:52.521 10:27:27 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:52.521 10:27:27 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 74000 00:06:52.521 10:27:27 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # '[' -z 74000 ']' 00:06:52.521 10:27:27 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # kill -0 74000 00:06:52.521 10:27:27 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # uname 00:06:52.521 10:27:27 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:52.521 10:27:27 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74000 00:06:52.521 10:27:27 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:52.521 killing process with pid 74000 00:06:52.521 10:27:27 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:52.521 10:27:27 blockdev_nvme_gpt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74000' 00:06:52.521 10:27:27 blockdev_nvme_gpt -- common/autotest_common.sh@969 -- # kill 74000 00:06:52.521 10:27:27 blockdev_nvme_gpt -- common/autotest_common.sh@974 -- # wait 74000 00:06:52.782 10:27:27 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:52.782 10:27:27 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:52.782 10:27:27 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:06:52.782 10:27:27 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:52.782 10:27:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:52.782 ************************************ 00:06:52.782 START TEST bdev_hello_world 00:06:52.782 ************************************ 00:06:52.782 10:27:27 blockdev_nvme_gpt.bdev_hello_world -- 
common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:52.782 [2024-09-28 10:27:27.545002] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:06:52.782 [2024-09-28 10:27:27.545118] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74608 ] 00:06:53.041 [2024-09-28 10:27:27.672828] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:53.041 [2024-09-28 10:27:27.691019] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.041 [2024-09-28 10:27:27.725330] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.609 [2024-09-28 10:27:28.095921] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:53.609 [2024-09-28 10:27:28.095981] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:53.609 [2024-09-28 10:27:28.096004] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:53.609 [2024-09-28 10:27:28.098134] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:53.609 [2024-09-28 10:27:28.098537] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:53.609 [2024-09-28 10:27:28.098565] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:53.609 [2024-09-28 10:27:28.098712] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:06:53.609 00:06:53.609 [2024-09-28 10:27:28.098738] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:53.609 00:06:53.609 real 0m0.780s 00:06:53.609 user 0m0.505s 00:06:53.609 sys 0m0.171s 00:06:53.609 10:27:28 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:53.609 10:27:28 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:53.609 ************************************ 00:06:53.609 END TEST bdev_hello_world 00:06:53.609 ************************************ 00:06:53.609 10:27:28 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:53.609 10:27:28 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:53.609 10:27:28 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:53.609 10:27:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:53.609 ************************************ 00:06:53.609 START TEST bdev_bounds 00:06:53.609 ************************************ 00:06:53.609 10:27:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:06:53.609 10:27:28 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=74639 00:06:53.609 10:27:28 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:53.609 Process bdevio pid: 74639 00:06:53.609 10:27:28 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 74639' 00:06:53.609 10:27:28 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 74639 00:06:53.609 10:27:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 74639 ']' 
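For reference, the hello_bdev example exercised just above can be replayed by hand outside the run_test harness. A minimal sketch, assuming the same repository layout and bdev JSON config used in this run (the trailing '' in the logged command is just an empty optional argument forwarded by the harness):

  # Opens the Nvme0n1 bdev described in bdev.json, writes "Hello World!",
  # reads it back, and shuts the app down -- same invocation as logged above.
  /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -b Nvme0n1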
00:06:53.609 10:27:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.609 10:27:28 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:53.609 10:27:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:53.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.609 10:27:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.609 10:27:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:53.609 10:27:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:53.869 [2024-09-28 10:27:28.395284] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:06:53.869 [2024-09-28 10:27:28.395752] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74639 ] 00:06:53.869 [2024-09-28 10:27:28.526463] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:53.869 [2024-09-28 10:27:28.546390] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:53.869 [2024-09-28 10:27:28.583493] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.869 [2024-09-28 10:27:28.583780] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:53.869 [2024-09-28 10:27:28.583845] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.808 10:27:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:54.808 10:27:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:06:54.808 10:27:29 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:54.808 I/O targets: 00:06:54.808 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:54.808 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:06:54.808 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:06:54.808 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:54.808 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:54.808 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:54.808 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:54.808 00:06:54.808 00:06:54.808 CUnit - A unit testing framework for C - Version 2.1-3 00:06:54.808 http://cunit.sourceforge.net/ 00:06:54.808 00:06:54.808 00:06:54.808 Suite: bdevio tests on: Nvme3n1 00:06:54.808 Test: blockdev write read block ...passed 00:06:54.808 Test: blockdev write zeroes read block ...passed 00:06:54.808 Test: blockdev write zeroes read no split ...passed 00:06:54.808 Test: blockdev write zeroes read split ...passed 00:06:54.808 Test: blockdev write zeroes read split partial ...passed 00:06:54.808 Test: blockdev reset ...[2024-09-28 10:27:29.367844] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:06:54.808 [2024-09-28 10:27:29.372419] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: 
*NOTICE*: Resetting controller successful. 00:06:54.808 passed 00:06:54.808 Test: blockdev write read 8 blocks ...passed 00:06:54.808 Test: blockdev write read size > 128k ...passed 00:06:54.808 Test: blockdev write read invalid size ...passed 00:06:54.808 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:54.808 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:54.808 Test: blockdev write read max offset ...passed 00:06:54.808 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:54.808 Test: blockdev writev readv 8 blocks ...passed 00:06:54.808 Test: blockdev writev readv 30 x 1block ...passed 00:06:54.808 Test: blockdev writev readv block ...passed 00:06:54.808 Test: blockdev writev readv size > 128k ...passed 00:06:54.808 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:54.808 Test: blockdev comparev and writev ...[2024-09-28 10:27:29.390851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c9a0e000 len:0x1000 00:06:54.808 [2024-09-28 10:27:29.391007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:54.808 passed 00:06:54.808 Test: blockdev nvme passthru rw ...passed 00:06:54.808 Test: blockdev nvme passthru vendor specific ...[2024-09-28 10:27:29.393110] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:54.808 [2024-09-28 10:27:29.393197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:54.808 passed 00:06:54.808 Test: blockdev nvme admin passthru ...passed 00:06:54.808 Test: blockdev copy ...passed 00:06:54.808 Suite: bdevio tests on: Nvme2n3 00:06:54.808 Test: blockdev write read block ...passed 00:06:54.808 Test: blockdev write zeroes read block ...passed 00:06:54.808 Test: blockdev write zeroes read no split ...passed 00:06:54.808 Test: blockdev write zeroes read split ...passed 00:06:54.808 Test: blockdev write zeroes read split partial ...passed 00:06:54.808 Test: blockdev reset ...[2024-09-28 10:27:29.421298] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:54.808 [2024-09-28 10:27:29.424223] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
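The "Suite: bdevio tests on: ..." blocks in this section come from the bdevio binary plus its Python RPC driver, both launched earlier. A condensed sketch of that two-step invocation, with flags copied verbatim from the logged command and paths assuming this run's layout:

  # bdevio stays resident; the Python driver then triggers the suites over
  # the default RPC socket (the harness waits for /var/tmp/spdk.sock via
  # waitforlisten before calling tests.py).
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests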
00:06:54.808 passed 00:06:54.808 Test: blockdev write read 8 blocks ...passed 00:06:54.808 Test: blockdev write read size > 128k ...passed 00:06:54.808 Test: blockdev write read invalid size ...passed 00:06:54.808 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:54.808 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:54.808 Test: blockdev write read max offset ...passed 00:06:54.808 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:54.808 Test: blockdev writev readv 8 blocks ...passed 00:06:54.808 Test: blockdev writev readv 30 x 1block ...passed 00:06:54.808 Test: blockdev writev readv block ...passed 00:06:54.808 Test: blockdev writev readv size > 128k ...passed 00:06:54.808 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:54.808 Test: blockdev comparev and writev ...[2024-09-28 10:27:29.442479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c9a0a000 len:0x1000 00:06:54.808 passed 00:06:54.808 Test: blockdev nvme passthru rw ...[2024-09-28 10:27:29.442676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:54.808 passed 00:06:54.808 Test: blockdev nvme passthru vendor specific ...[2024-09-28 10:27:29.444811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1passed 00:06:54.808 Test: blockdev nvme admin passthru ... cid:190 PRP1 0x0 PRP2 0x0 00:06:54.808 [2024-09-28 10:27:29.444995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:54.808 passed 00:06:54.808 Test: blockdev copy ...passed 00:06:54.808 Suite: bdevio tests on: Nvme2n2 00:06:54.808 Test: blockdev write read block ...passed 00:06:54.808 Test: blockdev write zeroes read block ...passed 00:06:54.808 Test: blockdev write zeroes read no split ...passed 00:06:54.808 Test: blockdev write zeroes read split ...passed 00:06:54.808 Test: blockdev write zeroes read split partial ...passed 00:06:54.808 Test: blockdev reset ...[2024-09-28 10:27:29.472507] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:54.808 passed 00:06:54.808 Test: blockdev write read 8 blocks ...[2024-09-28 10:27:29.474780] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
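Each per-bdev suite here targets one entry of the bdev list assembled near the top of this section with bdev_get_bdevs. A minimal sketch of that listing step against a running target on the default RPC socket, collapsing the harness's two mapfile/jq passes into one filter:

  # Keep only unclaimed bdevs, then pull their names -- with this config the
  # expected output is Nvme0n1, Nvme1n1p1, Nvme1n1p2, Nvme2n1, Nvme2n2,
  # Nvme2n3 and Nvme3n1.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
      | jq -r '.[] | select(.claimed == false) | .name'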
00:06:54.808 passed 00:06:54.808 Test: blockdev write read size > 128k ...passed 00:06:54.808 Test: blockdev write read invalid size ...passed 00:06:54.808 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:54.808 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:54.809 Test: blockdev write read max offset ...passed 00:06:54.809 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:54.809 Test: blockdev writev readv 8 blocks ...passed 00:06:54.809 Test: blockdev writev readv 30 x 1block ...passed 00:06:54.809 Test: blockdev writev readv block ...passed 00:06:54.809 Test: blockdev writev readv size > 128k ...passed 00:06:54.809 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:54.809 Test: blockdev comparev and writev ...[2024-09-28 10:27:29.491438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cc005000 len:0x1000 00:06:54.809 [2024-09-28 10:27:29.491609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:54.809 passed 00:06:54.809 Test: blockdev nvme passthru rw ...passed 00:06:54.809 Test: blockdev nvme passthru vendor specific ...[2024-09-28 10:27:29.494236] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:54.809 passed 00:06:54.809 Test: blockdev nvme admin passthru ...[2024-09-28 10:27:29.494373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:54.809 passed 00:06:54.809 Test: blockdev copy ...passed 00:06:54.809 Suite: bdevio tests on: Nvme2n1 00:06:54.809 Test: blockdev write read block ...passed 00:06:54.809 Test: blockdev write zeroes read block ...passed 00:06:54.809 Test: blockdev write zeroes read no split ...passed 00:06:54.809 Test: blockdev write zeroes read split ...passed 00:06:54.809 Test: blockdev write zeroes read split partial ...passed 00:06:54.809 Test: blockdev reset ...[2024-09-28 10:27:29.524873] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:54.809 [2024-09-28 10:27:29.527224] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:54.809 passed 00:06:54.809 Test: blockdev write read 8 blocks ...passed 00:06:54.809 Test: blockdev write read size > 128k ...passed 00:06:54.809 Test: blockdev write read invalid size ...passed 00:06:54.809 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:54.809 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:54.809 Test: blockdev write read max offset ...passed 00:06:54.809 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:54.809 Test: blockdev writev readv 8 blocks ...passed 00:06:54.809 Test: blockdev writev readv 30 x 1block ...passed 00:06:54.809 Test: blockdev writev readv block ...passed 00:06:54.809 Test: blockdev writev readv size > 128k ...passed 00:06:54.809 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:54.809 Test: blockdev comparev and writev ...[2024-09-28 10:27:29.540790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bd202000 len:0x1000 00:06:54.809 passed 00:06:54.809 Test: blockdev nvme passthru rw ...passed 00:06:54.809 Test: blockdev nvme passthru vendor specific ...[2024-09-28 10:27:29.540977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:54.809 [2024-09-28 10:27:29.541482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:54.809 [2024-09-28 10:27:29.541622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:54.809 passed 00:06:54.809 Test: blockdev nvme admin passthru ...passed 00:06:54.809 Test: blockdev copy ...passed 00:06:54.809 Suite: bdevio tests on: Nvme1n1p2 00:06:54.809 Test: blockdev write read block ...passed 00:06:54.809 Test: blockdev write zeroes read block ...passed 00:06:54.809 Test: blockdev write zeroes read no split ...passed 00:06:54.809 Test: blockdev write zeroes read split ...passed 00:06:54.809 Test: blockdev write zeroes read split partial ...passed 00:06:54.809 Test: blockdev reset ...[2024-09-28 10:27:29.564553] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:06:54.809 passed 00:06:54.809 Test: blockdev write read 8 blocks ...[2024-09-28 10:27:29.566499] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:54.809 passed 00:06:54.809 Test: blockdev write read size > 128k ...passed 00:06:54.809 Test: blockdev write read invalid size ...passed 00:06:54.809 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:54.809 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:54.809 Test: blockdev write read max offset ...passed 00:06:54.809 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:54.809 Test: blockdev writev readv 8 blocks ...passed 00:06:54.809 Test: blockdev writev readv 30 x 1block ...passed 00:06:54.809 Test: blockdev writev readv block ...passed 00:06:54.809 Test: blockdev writev readv size > 128k ...passed 00:06:54.809 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:54.809 Test: blockdev comparev and writev ...[2024-09-28 10:27:29.581888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 lpassed 00:06:54.809 Test: blockdev nvme passthru rw ...passed 00:06:54.809 Test: blockdev nvme passthru vendor specific ...passed 00:06:54.809 Test: blockdev nvme admin passthru ...passed 00:06:54.809 Test: blockdev copy ...en:1 SGL DATA BLOCK ADDRESS 0x2cfa3b000 len:0x1000 00:06:54.809 [2024-09-28 10:27:29.582079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:55.071 passed 00:06:55.071 Suite: bdevio tests on: Nvme1n1p1 00:06:55.071 Test: blockdev write read block ...passed 00:06:55.071 Test: blockdev write zeroes read block ...passed 00:06:55.071 Test: blockdev write zeroes read no split ...passed 00:06:55.071 Test: blockdev write zeroes read split ...passed 00:06:55.071 Test: blockdev write zeroes read split partial ...passed 00:06:55.071 Test: blockdev reset ...[2024-09-28 10:27:29.601916] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:06:55.071 passed 00:06:55.071 Test: blockdev write read 8 blocks ...[2024-09-28 10:27:29.605813] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:55.071 passed 00:06:55.071 Test: blockdev write read size > 128k ...passed 00:06:55.071 Test: blockdev write read invalid size ...passed 00:06:55.071 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:55.071 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:55.071 Test: blockdev write read max offset ...passed 00:06:55.071 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:55.071 Test: blockdev writev readv 8 blocks ...passed 00:06:55.071 Test: blockdev writev readv 30 x 1block ...passed 00:06:55.071 Test: blockdev writev readv block ...passed 00:06:55.071 Test: blockdev writev readv size > 128k ...passed 00:06:55.071 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:55.071 Test: blockdev comparev and writev ...[2024-09-28 10:27:29.623999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:passed 00:06:55.071 Test: blockdev nvme passthru rw ...passed 00:06:55.071 Test: blockdev nvme passthru vendor specific ...passed 00:06:55.071 Test: blockdev nvme admin passthru ...passed 00:06:55.071 Test: blockdev copy ...1 SGL DATA BLOCK ADDRESS 0x2cfa37000 len:0x1000 00:06:55.071 [2024-09-28 10:27:29.624201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:55.071 passed 00:06:55.071 Suite: bdevio tests on: Nvme0n1 00:06:55.071 Test: blockdev write read block ...passed 00:06:55.071 Test: blockdev write zeroes read block ...passed 00:06:55.071 Test: blockdev write zeroes read no split ...passed 00:06:55.071 Test: blockdev write zeroes read split ...passed 00:06:55.071 Test: blockdev write zeroes read split partial ...passed 00:06:55.071 Test: blockdev reset ...[2024-09-28 10:27:29.644237] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:06:55.071 passed 00:06:55.071 Test: blockdev write read 8 blocks ...[2024-09-28 10:27:29.647537] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:06:55.071 passed 00:06:55.071 Test: blockdev write read size > 128k ...passed 00:06:55.071 Test: blockdev write read invalid size ...passed 00:06:55.071 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:55.071 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:55.071 Test: blockdev write read max offset ...passed 00:06:55.071 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:55.071 Test: blockdev writev readv 8 blocks ...passed 00:06:55.071 Test: blockdev writev readv 30 x 1block ...passed 00:06:55.071 Test: blockdev writev readv block ...passed 00:06:55.071 Test: blockdev writev readv size > 128k ...passed 00:06:55.071 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:55.071 Test: blockdev comparev and writev ...passed 00:06:55.071 Test: blockdev nvme passthru rw ...[2024-09-28 10:27:29.655488] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:55.071 separate metadata which is not supported yet. 
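The "skipping comparev_and_writev" error above is expected: in the bdev dump earlier in this section, Nvme0n1 is the only bdev exposing separate (non-interleaved) metadata ("md_size": 64, "md_interleave": false), which bdevio does not yet support for that test. A quick way to confirm from the same RPC output (sketch, default socket assumed):

  # Lists bdevs with separate metadata; with this configuration only
  # Nvme0n1 should match.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
      | jq -r '.[] | select(.md_size != null and .md_size > 0 and .md_interleave == false) | .name'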
00:06:55.071 passed 00:06:55.071 Test: blockdev nvme passthru vendor specific ...[2024-09-28 10:27:29.657201] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:55.071 [2024-09-28 10:27:29.657473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:55.071 passed 00:06:55.071 Test: blockdev nvme admin passthru ...passed 00:06:55.071 Test: blockdev copy ...passed 00:06:55.071 00:06:55.071 Run Summary: Type Total Ran Passed Failed Inactive 00:06:55.071 suites 7 7 n/a 0 0 00:06:55.071 tests 161 161 161 0 0 00:06:55.071 asserts 1025 1025 1025 0 n/a 00:06:55.071 00:06:55.071 Elapsed time = 0.704 seconds 00:06:55.071 0 00:06:55.071 10:27:29 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 74639 00:06:55.071 10:27:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 74639 ']' 00:06:55.071 10:27:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 74639 00:06:55.071 10:27:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:06:55.071 10:27:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:55.071 10:27:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74639 00:06:55.071 10:27:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:55.071 10:27:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:55.071 10:27:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74639' 00:06:55.071 killing process with pid 74639 00:06:55.071 10:27:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@969 -- # kill 74639 00:06:55.071 10:27:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@974 -- # wait 74639 00:06:55.332 10:27:29 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:55.332 00:06:55.332 real 0m1.538s 00:06:55.332 user 0m3.807s 00:06:55.332 sys 0m0.281s 00:06:55.332 10:27:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:55.332 10:27:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:55.332 ************************************ 00:06:55.332 END TEST bdev_bounds 00:06:55.332 ************************************ 00:06:55.332 10:27:29 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:55.332 10:27:29 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:55.332 10:27:29 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:55.332 10:27:29 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:55.332 ************************************ 00:06:55.332 START TEST bdev_nbd 00:06:55.332 ************************************ 00:06:55.332 10:27:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:55.332 10:27:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:55.332 10:27:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:06:55.332 10:27:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.332 10:27:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:55.332 10:27:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:55.332 10:27:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:55.332 10:27:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:06:55.332 10:27:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:55.332 10:27:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:55.332 10:27:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:55.332 10:27:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:06:55.332 10:27:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:55.332 10:27:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:55.332 10:27:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:55.332 10:27:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:55.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:55.332 10:27:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=74688 00:06:55.332 10:27:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:55.332 10:27:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 74688 /var/tmp/spdk-nbd.sock 00:06:55.332 10:27:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 74688 ']' 00:06:55.332 10:27:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:55.332 10:27:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:55.332 10:27:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:55.332 10:27:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:55.332 10:27:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:55.332 10:27:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:55.332 [2024-09-28 10:27:30.014794] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
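What follows is the NBD leg of the test: bdev_svc is now listening on /var/tmp/spdk-nbd.sock, each bdev is exported as a /dev/nbdX device, probed with a single direct-I/O read, and then torn down again, as the output below shows. A condensed sketch of one such round trip over the same socket (the scratch output path is illustrative; the harness uses whatever device nbd_start_disk returns and also polls /proc/partitions before reading):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  dev=$("$RPC" -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1)   # e.g. /dev/nbd0
  dd if="$dev" of=/tmp/nbdtest bs=4096 count=1 iflag=direct        # one block readable?
  "$RPC" -s /var/tmp/spdk-nbd.sock nbd_get_disks                   # list active exports
  "$RPC" -s /var/tmp/spdk-nbd.sock nbd_stop_disk "$dev"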
00:06:55.332 [2024-09-28 10:27:30.015053] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:55.590 [2024-09-28 10:27:30.145406] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:55.590 [2024-09-28 10:27:30.164104] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.590 [2024-09-28 10:27:30.195970] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.155 10:27:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:56.155 10:27:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:06:56.155 10:27:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:56.155 10:27:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.155 10:27:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:56.155 10:27:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:56.155 10:27:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:56.155 10:27:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.155 10:27:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:56.155 10:27:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:56.155 10:27:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:56.155 10:27:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:56.155 10:27:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:56.156 10:27:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:56.156 10:27:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:56.413 10:27:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:56.413 10:27:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:56.413 10:27:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:56.413 10:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:56.413 10:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:56.413 10:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:56.413 10:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:56.413 10:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:56.413 10:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:56.413 10:27:31 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:56.413 10:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:56.413 10:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:56.413 1+0 records in 00:06:56.413 1+0 records out 00:06:56.413 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000594425 s, 6.9 MB/s 00:06:56.413 10:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.413 10:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:56.413 10:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.413 10:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:56.413 10:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:56.413 10:27:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:56.413 10:27:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:56.413 10:27:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:06:56.671 10:27:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:56.671 10:27:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:56.671 10:27:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:56.671 10:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:56.671 10:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:56.671 10:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:56.671 10:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:56.671 10:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:56.671 10:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:56.671 10:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:56.671 10:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:56.671 10:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:56.671 1+0 records in 00:06:56.671 1+0 records out 00:06:56.671 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000689301 s, 5.9 MB/s 00:06:56.671 10:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.671 10:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:56.671 10:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.671 10:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:56.671 10:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:56.671 10:27:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:56.671 10:27:31 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:56.671 10:27:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 00:06:56.931 10:27:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:56.931 10:27:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:56.931 10:27:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:56.931 10:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:06:56.931 10:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:56.931 10:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:56.931 10:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:56.931 10:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:06:56.931 10:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:56.931 10:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:56.931 10:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:56.931 10:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:56.931 1+0 records in 00:06:56.931 1+0 records out 00:06:56.931 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00126603 s, 3.2 MB/s 00:06:56.931 10:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.931 10:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:56.931 10:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.931 10:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:56.931 10:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:56.931 10:27:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:56.931 10:27:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:56.932 10:27:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:57.192 10:27:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:57.192 10:27:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:57.192 10:27:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:57.192 10:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:06:57.192 10:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:57.192 10:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:57.192 10:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:57.192 10:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:06:57.192 10:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:57.192 10:27:31 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:57.192 10:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:57.192 10:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:57.192 1+0 records in 00:06:57.192 1+0 records out 00:06:57.192 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000885771 s, 4.6 MB/s 00:06:57.193 10:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:57.193 10:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:57.193 10:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:57.193 10:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:57.193 10:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:57.193 10:27:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:57.193 10:27:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:57.193 10:27:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:57.454 10:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:57.454 10:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:57.454 10:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:57.454 10:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:06:57.454 10:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:57.454 10:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:57.454 10:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:57.454 10:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:06:57.454 10:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:57.454 10:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:57.454 10:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:57.454 10:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:57.454 1+0 records in 00:06:57.454 1+0 records out 00:06:57.454 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105504 s, 3.9 MB/s 00:06:57.454 10:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:57.454 10:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:57.454 10:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:57.454 10:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:57.454 10:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:57.454 10:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:57.454 10:27:32 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:57.454 10:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:57.715 10:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:57.715 10:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:57.715 10:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:57.715 10:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:06:57.715 10:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:57.715 10:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:57.715 10:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:57.715 10:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:06:57.715 10:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:57.715 10:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:57.715 10:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:57.715 10:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:57.715 1+0 records in 00:06:57.715 1+0 records out 00:06:57.715 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000877346 s, 4.7 MB/s 00:06:57.715 10:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:57.715 10:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:57.715 10:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:57.715 10:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:57.715 10:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:57.715 10:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:57.715 10:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:57.715 10:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:57.977 10:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:06:57.977 10:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:06:57.977 10:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:06:57.977 10:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:06:57.977 10:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:57.977 10:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:57.977 10:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:57.977 10:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:06:57.977 10:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:57.977 10:27:32 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:57.977 10:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:57.977 10:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:57.977 1+0 records in 00:06:57.977 1+0 records out 00:06:57.977 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000654493 s, 6.3 MB/s 00:06:57.977 10:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:57.977 10:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:57.977 10:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:57.977 10:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:57.977 10:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:57.977 10:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:57.977 10:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:57.977 10:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:57.977 10:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:57.977 { 00:06:57.977 "nbd_device": "/dev/nbd0", 00:06:57.977 "bdev_name": "Nvme0n1" 00:06:57.977 }, 00:06:57.977 { 00:06:57.977 "nbd_device": "/dev/nbd1", 00:06:57.977 "bdev_name": "Nvme1n1p1" 00:06:57.977 }, 00:06:57.977 { 00:06:57.977 "nbd_device": "/dev/nbd2", 00:06:57.977 "bdev_name": "Nvme1n1p2" 00:06:57.977 }, 00:06:57.977 { 00:06:57.977 "nbd_device": "/dev/nbd3", 00:06:57.977 "bdev_name": "Nvme2n1" 00:06:57.977 }, 00:06:57.977 { 00:06:57.977 "nbd_device": "/dev/nbd4", 00:06:57.977 "bdev_name": "Nvme2n2" 00:06:57.977 }, 00:06:57.977 { 00:06:57.977 "nbd_device": "/dev/nbd5", 00:06:57.977 "bdev_name": "Nvme2n3" 00:06:57.977 }, 00:06:57.977 { 00:06:57.977 "nbd_device": "/dev/nbd6", 00:06:57.977 "bdev_name": "Nvme3n1" 00:06:57.977 } 00:06:57.977 ]' 00:06:57.977 10:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:57.977 10:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:57.977 { 00:06:57.977 "nbd_device": "/dev/nbd0", 00:06:57.977 "bdev_name": "Nvme0n1" 00:06:57.977 }, 00:06:57.977 { 00:06:57.977 "nbd_device": "/dev/nbd1", 00:06:57.977 "bdev_name": "Nvme1n1p1" 00:06:57.977 }, 00:06:57.977 { 00:06:57.977 "nbd_device": "/dev/nbd2", 00:06:57.977 "bdev_name": "Nvme1n1p2" 00:06:57.977 }, 00:06:57.977 { 00:06:57.977 "nbd_device": "/dev/nbd3", 00:06:57.977 "bdev_name": "Nvme2n1" 00:06:57.977 }, 00:06:57.977 { 00:06:57.977 "nbd_device": "/dev/nbd4", 00:06:57.977 "bdev_name": "Nvme2n2" 00:06:57.977 }, 00:06:57.977 { 00:06:57.977 "nbd_device": "/dev/nbd5", 00:06:57.977 "bdev_name": "Nvme2n3" 00:06:57.977 }, 00:06:57.977 { 00:06:57.977 "nbd_device": "/dev/nbd6", 00:06:57.977 "bdev_name": "Nvme3n1" 00:06:57.977 } 00:06:57.977 ]' 00:06:57.977 10:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:58.237 10:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 
/dev/nbd6' 00:06:58.237 10:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.237 10:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:06:58.237 10:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:58.237 10:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:58.237 10:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:58.237 10:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:58.237 10:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:58.237 10:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:58.237 10:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:58.237 10:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:58.237 10:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:58.237 10:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:58.237 10:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:58.237 10:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:58.237 10:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:58.237 10:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:58.495 10:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:58.495 10:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:58.495 10:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:58.495 10:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:58.495 10:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:58.495 10:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:58.495 10:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:58.495 10:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:58.495 10:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:58.495 10:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:58.753 10:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:58.753 10:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:58.753 10:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:58.753 10:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:58.753 10:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:58.753 10:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:58.753 10:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:58.753 10:27:33 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:58.753 10:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:58.753 10:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:59.013 10:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:59.013 10:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:59.013 10:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:59.013 10:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:59.013 10:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:59.013 10:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:59.013 10:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:59.013 10:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:59.013 10:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:59.013 10:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:59.274 10:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:59.274 10:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:59.274 10:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:59.274 10:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:59.274 10:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:59.274 10:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:59.274 10:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:59.275 10:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:59.275 10:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:59.275 10:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:59.275 10:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:59.275 10:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:59.275 10:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:59.535 10:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:59.535 10:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:59.535 10:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:59.535 10:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:59.535 10:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:59.535 10:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:59.535 10:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:06:59.535 10:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # 
basename /dev/nbd6 00:06:59.535 10:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:06:59.535 10:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:06:59.535 10:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:59.535 10:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:59.535 10:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:06:59.535 10:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:59.535 10:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:59.535 10:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:59.535 10:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.535 10:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:59.796 10:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:59.796 10:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:59.796 10:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:59.796 10:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:59.796 10:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:59.796 10:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:59.796 10:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:59.796 10:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:59.796 10:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:59.796 10:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:59.796 10:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:59.796 10:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:59.796 10:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:59.796 10:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.796 10:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:59.796 10:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:59.796 10:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:59.796 10:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:59.796 10:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:59.796 10:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.796 10:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # 
bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:59.796 10:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:59.796 10:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:59.796 10:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:59.796 10:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:59.796 10:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:59.796 10:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:59.796 10:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:00.057 /dev/nbd0 00:07:00.057 10:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:00.057 10:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:00.057 10:27:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:00.057 10:27:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:00.057 10:27:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:00.057 10:27:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:00.057 10:27:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:00.057 10:27:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:00.057 10:27:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:00.057 10:27:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:00.057 10:27:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:00.057 1+0 records in 00:07:00.057 1+0 records out 00:07:00.057 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00122692 s, 3.3 MB/s 00:07:00.057 10:27:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.057 10:27:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:00.057 10:27:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.057 10:27:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:00.057 10:27:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:00.057 10:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:00.057 10:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:00.057 10:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:00.345 /dev/nbd1 00:07:00.345 10:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:00.345 10:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:00.345 10:27:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:00.345 10:27:34 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:00.345 10:27:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:00.345 10:27:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:00.345 10:27:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:00.345 10:27:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:00.345 10:27:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:00.345 10:27:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:00.345 10:27:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:00.345 1+0 records in 00:07:00.345 1+0 records out 00:07:00.345 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000817678 s, 5.0 MB/s 00:07:00.345 10:27:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.345 10:27:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:00.345 10:27:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.345 10:27:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:00.345 10:27:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:00.345 10:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:00.345 10:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:00.345 10:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:00.606 /dev/nbd10 00:07:00.606 10:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:00.606 10:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:00.606 10:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:07:00.606 10:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:00.606 10:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:00.606 10:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:00.606 10:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:07:00.606 10:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:00.606 10:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:00.606 10:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:00.606 10:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:00.606 1+0 records in 00:07:00.606 1+0 records out 00:07:00.606 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106303 s, 3.9 MB/s 00:07:00.606 10:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.606 10:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:00.606 
10:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.607 10:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:00.607 10:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:00.607 10:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:00.607 10:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:00.607 10:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:00.867 /dev/nbd11 00:07:00.867 10:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:00.867 10:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:00.867 10:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:07:00.867 10:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:00.867 10:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:00.867 10:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:00.867 10:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:07:00.867 10:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:00.867 10:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:00.867 10:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:00.867 10:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:00.867 1+0 records in 00:07:00.867 1+0 records out 00:07:00.867 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00117978 s, 3.5 MB/s 00:07:00.867 10:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.867 10:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:00.867 10:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.867 10:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:00.867 10:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:00.867 10:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:00.867 10:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:00.867 10:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:01.128 /dev/nbd12 00:07:01.128 10:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:01.128 10:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:01.128 10:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:07:01.128 10:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:01.128 10:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:01.128 10:27:35 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:01.128 10:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:07:01.128 10:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:01.128 10:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:01.128 10:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:01.128 10:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:01.128 1+0 records in 00:07:01.128 1+0 records out 00:07:01.128 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00076449 s, 5.4 MB/s 00:07:01.128 10:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.128 10:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:01.128 10:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.128 10:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:01.128 10:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:01.128 10:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:01.128 10:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:01.128 10:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:01.128 /dev/nbd13 00:07:01.390 10:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:01.390 10:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:01.390 10:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:07:01.390 10:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:01.390 10:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:01.390 10:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:01.390 10:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:07:01.390 10:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:01.390 10:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:01.390 10:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:01.390 10:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:01.390 1+0 records in 00:07:01.390 1+0 records out 00:07:01.390 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105052 s, 3.9 MB/s 00:07:01.390 10:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.390 10:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:01.390 10:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.390 10:27:35 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:01.390 10:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:01.390 10:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:01.390 10:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:01.390 10:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:01.391 /dev/nbd14 00:07:01.391 10:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:01.391 10:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:01.391 10:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:07:01.391 10:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:01.391 10:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:01.391 10:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:01.391 10:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:07:01.391 10:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:01.391 10:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:01.391 10:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:01.391 10:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:01.391 1+0 records in 00:07:01.391 1+0 records out 00:07:01.391 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00114442 s, 3.6 MB/s 00:07:01.391 10:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.391 10:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:01.391 10:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.652 10:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:01.652 10:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:01.652 10:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:01.652 10:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:01.652 10:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:01.652 10:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.652 10:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:01.652 10:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:01.652 { 00:07:01.652 "nbd_device": "/dev/nbd0", 00:07:01.652 "bdev_name": "Nvme0n1" 00:07:01.652 }, 00:07:01.652 { 00:07:01.652 "nbd_device": "/dev/nbd1", 00:07:01.652 "bdev_name": "Nvme1n1p1" 00:07:01.652 }, 00:07:01.652 { 00:07:01.652 "nbd_device": "/dev/nbd10", 00:07:01.652 "bdev_name": "Nvme1n1p2" 00:07:01.652 }, 00:07:01.652 { 00:07:01.652 "nbd_device": "/dev/nbd11", 00:07:01.652 "bdev_name": "Nvme2n1" 00:07:01.652 
}, 00:07:01.652 { 00:07:01.652 "nbd_device": "/dev/nbd12", 00:07:01.652 "bdev_name": "Nvme2n2" 00:07:01.652 }, 00:07:01.652 { 00:07:01.652 "nbd_device": "/dev/nbd13", 00:07:01.652 "bdev_name": "Nvme2n3" 00:07:01.652 }, 00:07:01.652 { 00:07:01.652 "nbd_device": "/dev/nbd14", 00:07:01.652 "bdev_name": "Nvme3n1" 00:07:01.652 } 00:07:01.652 ]' 00:07:01.652 10:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:01.652 { 00:07:01.652 "nbd_device": "/dev/nbd0", 00:07:01.652 "bdev_name": "Nvme0n1" 00:07:01.652 }, 00:07:01.652 { 00:07:01.652 "nbd_device": "/dev/nbd1", 00:07:01.652 "bdev_name": "Nvme1n1p1" 00:07:01.652 }, 00:07:01.652 { 00:07:01.652 "nbd_device": "/dev/nbd10", 00:07:01.652 "bdev_name": "Nvme1n1p2" 00:07:01.652 }, 00:07:01.652 { 00:07:01.652 "nbd_device": "/dev/nbd11", 00:07:01.652 "bdev_name": "Nvme2n1" 00:07:01.652 }, 00:07:01.652 { 00:07:01.652 "nbd_device": "/dev/nbd12", 00:07:01.652 "bdev_name": "Nvme2n2" 00:07:01.652 }, 00:07:01.652 { 00:07:01.652 "nbd_device": "/dev/nbd13", 00:07:01.652 "bdev_name": "Nvme2n3" 00:07:01.652 }, 00:07:01.652 { 00:07:01.652 "nbd_device": "/dev/nbd14", 00:07:01.652 "bdev_name": "Nvme3n1" 00:07:01.652 } 00:07:01.652 ]' 00:07:01.652 10:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:01.652 10:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:01.652 /dev/nbd1 00:07:01.652 /dev/nbd10 00:07:01.652 /dev/nbd11 00:07:01.652 /dev/nbd12 00:07:01.652 /dev/nbd13 00:07:01.652 /dev/nbd14' 00:07:01.652 10:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:01.652 /dev/nbd1 00:07:01.652 /dev/nbd10 00:07:01.652 /dev/nbd11 00:07:01.652 /dev/nbd12 00:07:01.652 /dev/nbd13 00:07:01.652 /dev/nbd14' 00:07:01.652 10:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:01.652 10:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:01.652 10:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:01.652 10:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:01.652 10:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:01.652 10:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:01.652 10:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:01.652 10:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:01.652 10:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:01.652 10:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:01.652 10:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:01.652 10:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:01.652 256+0 records in 00:07:01.652 256+0 records out 00:07:01.652 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00805406 s, 130 MB/s 00:07:01.652 10:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:01.652 10:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd 
if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:01.914 256+0 records in 00:07:01.914 256+0 records out 00:07:01.914 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.217426 s, 4.8 MB/s 00:07:01.914 10:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:01.914 10:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:02.175 256+0 records in 00:07:02.175 256+0 records out 00:07:02.175 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.229608 s, 4.6 MB/s 00:07:02.175 10:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:02.175 10:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:02.435 256+0 records in 00:07:02.435 256+0 records out 00:07:02.435 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.20708 s, 5.1 MB/s 00:07:02.435 10:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:02.435 10:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:02.696 256+0 records in 00:07:02.696 256+0 records out 00:07:02.696 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.216371 s, 4.8 MB/s 00:07:02.696 10:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:02.696 10:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:02.957 256+0 records in 00:07:02.957 256+0 records out 00:07:02.957 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.18761 s, 5.6 MB/s 00:07:02.957 10:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:02.957 10:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:02.957 256+0 records in 00:07:02.957 256+0 records out 00:07:02.957 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.23461 s, 4.5 MB/s 00:07:02.957 10:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:02.957 10:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:03.218 256+0 records in 00:07:03.218 256+0 records out 00:07:03.218 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.238805 s, 4.4 MB/s 00:07:03.218 10:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:03.218 10:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:03.218 10:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:03.218 10:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:03.218 10:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:03.218 10:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = 
write ']' 00:07:03.218 10:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:03.218 10:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:03.218 10:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:03.218 10:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:03.218 10:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:03.478 10:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:03.478 10:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:03.478 10:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:03.478 10:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:03.478 10:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:03.478 10:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:03.478 10:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:03.478 10:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:03.478 10:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:03.478 10:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:03.478 10:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:03.478 10:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:03.478 10:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.478 10:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:03.478 10:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:03.478 10:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:03.478 10:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.478 10:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:03.478 10:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:03.478 10:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:03.478 10:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:03.479 10:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.479 10:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.479 10:27:38 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:03.479 10:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:03.479 10:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.479 10:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.479 10:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:03.740 10:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:03.740 10:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:03.740 10:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:03.740 10:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.740 10:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.740 10:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:03.740 10:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:03.740 10:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.740 10:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.740 10:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:04.000 10:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:04.000 10:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:04.000 10:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:04.000 10:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.000 10:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.000 10:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:04.000 10:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:04.000 10:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.000 10:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.000 10:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:04.261 10:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:04.261 10:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:04.261 10:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:04.261 10:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.261 10:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.261 10:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:04.261 10:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:04.261 10:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.261 10:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.261 10:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:04.522 10:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:04.522 10:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:04.522 10:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:04.522 10:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.522 10:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.522 10:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:04.522 10:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:04.522 10:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.522 10:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.522 10:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:04.522 10:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:04.522 10:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:04.522 10:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:04.522 10:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.522 10:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.522 10:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:04.522 10:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:04.522 10:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.522 10:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.522 10:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:04.782 10:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:04.782 10:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:04.782 10:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:04.782 10:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.782 10:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.782 10:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:04.782 10:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:04.782 10:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.782 10:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:04.782 10:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.783 10:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:05.042 10:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:05.043 10:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r 
'.[] | .nbd_device' 00:07:05.043 10:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:05.043 10:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:05.043 10:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:05.043 10:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:05.043 10:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:05.043 10:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:05.043 10:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:05.043 10:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:05.043 10:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:05.043 10:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:05.043 10:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:05.043 10:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.043 10:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:05.043 10:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:05.303 malloc_lvol_verify 00:07:05.303 10:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:05.562 73855518-cdad-4871-8657-5aa457f0a560 00:07:05.562 10:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:05.821 6c492470-d798-423d-8ffd-90add4eaa4b5 00:07:05.821 10:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:05.821 /dev/nbd0 00:07:05.821 10:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:05.821 10:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:05.821 10:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:05.821 10:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:05.822 10:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:05.822 mke2fs 1.47.0 (5-Feb-2023) 00:07:05.822 Discarding device blocks: 0/4096 done 00:07:05.822 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:05.822 00:07:05.822 Allocating group tables: 0/1 done 00:07:05.822 Writing inode tables: 0/1 done 00:07:05.822 Creating journal (1024 blocks): done 00:07:05.822 Writing superblocks and filesystem accounting information: 0/1 done 00:07:05.822 00:07:05.822 10:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:05.822 10:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.822 10:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:05.822 10:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 
00:07:05.822 10:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:05.822 10:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.822 10:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:06.081 10:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:06.081 10:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:06.081 10:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:06.081 10:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.081 10:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.081 10:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:06.081 10:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:06.081 10:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.081 10:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 74688 00:07:06.081 10:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 74688 ']' 00:07:06.081 10:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 74688 00:07:06.081 10:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:07:06.081 10:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:06.081 10:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74688 00:07:06.081 10:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:06.081 10:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:06.081 10:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74688' 00:07:06.081 killing process with pid 74688 00:07:06.081 10:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@969 -- # kill 74688 00:07:06.081 10:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@974 -- # wait 74688 00:07:06.341 10:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:06.341 00:07:06.341 real 0m11.075s 00:07:06.341 user 0m15.447s 00:07:06.341 sys 0m3.715s 00:07:06.341 10:27:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:06.341 10:27:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:06.341 ************************************ 00:07:06.341 END TEST bdev_nbd 00:07:06.341 ************************************ 00:07:06.341 skipping fio tests on NVMe due to multi-ns failures. 00:07:06.341 10:27:41 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:06.341 10:27:41 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:07:06.341 10:27:41 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:07:06.341 10:27:41 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
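For reference, the nbd_common.sh helpers traced in the bdev_nbd test above boil down to a handful of RPCs against the spdk-nbd socket. A minimal sketch of roughly the same flow, assuming an SPDK target is already serving /var/tmp/spdk-nbd.sock and exposes the Nvme0n1 bdev; the scratch path /tmp/nbdtest is illustrative, and the until-loop is a simplification of the bounded retry the test script uses:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    # Export a bdev as a kernel NBD device and wait until the kernel registers it
    $RPC nbd_start_disk Nvme0n1 /dev/nbd0
    until grep -q -w nbd0 /proc/partitions; do sleep 0.1; done
    # Same readability check the test performs on every exported device
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    # List the current bdev-to-nbd mappings, then tear the device down again
    $RPC nbd_get_disks | jq -r '.[] | .nbd_device'
    $RPC nbd_stop_disk /dev/nbd0
    # The lvol check at the end of the test does the same through a logical volume
    $RPC bdev_malloc_create -b malloc_lvol_verify 16 512
    $RPC bdev_lvol_create_lvstore malloc_lvol_verify lvs
    $RPC bdev_lvol_create lvol 4 -l lvs
    $RPC nbd_start_disk lvs/lvol /dev/nbd0 && mkfs.ext4 /dev/nbd0

The test drives exactly these RPCs for all seven bdevs (Nvme0n1 through Nvme3n1, including the two GPT partitions of Nvme1n1), which is why the trace repeats the waitfornbd/dd/stat pattern once per device.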
00:07:06.341 10:27:41 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:06.341 10:27:41 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:06.341 10:27:41 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:06.341 10:27:41 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.341 10:27:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:06.341 ************************************ 00:07:06.341 START TEST bdev_verify 00:07:06.341 ************************************ 00:07:06.341 10:27:41 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:06.602 [2024-09-28 10:27:41.132174] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:07:06.602 [2024-09-28 10:27:41.132283] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75100 ] 00:07:06.602 [2024-09-28 10:27:41.260664] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:06.602 [2024-09-28 10:27:41.279431] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:06.602 [2024-09-28 10:27:41.314983] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.602 [2024-09-28 10:27:41.315069] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.172 Running I/O for 5 seconds... 
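The verify pass is driven by the bdevperf example application; the invocation is the one traced at the start of this test, restated here with the flags spelled out. The flag descriptions come from general bdevperf usage rather than from the trace itself, and bdev.json is the bdev configuration the autotest generated earlier in the run:

    # -q 128: queue depth per job; -o 4096: I/O size in bytes; -w verify:
    # write-then-read-back verification workload; -t 5: run time in seconds;
    # -m 0x3: core mask selecting two reactor cores; -C is passed as in the trace.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3

With core mask 0x3, each attached bdev gets a verify job on each of the two cores, which is why every device appears twice (Core Mask 0x1 and Core Mask 0x2) in the results table that follows.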
00:07:12.396 19072.00 IOPS, 74.50 MiB/s 19392.00 IOPS, 75.75 MiB/s 19072.00 IOPS, 74.50 MiB/s 19440.00 IOPS, 75.94 MiB/s 19046.40 IOPS, 74.40 MiB/s
00:07:12.396 Latency(us)
00:07:12.396 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:12.396 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:12.396 Verification LBA range: start 0x0 length 0xbd0bd
00:07:12.396 Nvme0n1 : 5.08 1361.17 5.32 0.00 0.00 93719.11 19862.45 100018.02
00:07:12.396 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:12.396 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:07:12.396 Nvme0n1 : 5.07 1312.21 5.13 0.00 0.00 97238.01 22181.42 100018.02
00:07:12.396 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:12.396 Verification LBA range: start 0x0 length 0x4ff80
00:07:12.396 Nvme1n1p1 : 5.08 1360.46 5.31 0.00 0.00 93609.23 21979.77 90742.15
00:07:12.396 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:12.396 Verification LBA range: start 0x4ff80 length 0x4ff80
00:07:12.396 Nvme1n1p1 : 5.07 1311.76 5.12 0.00 0.00 96986.84 24500.38 87515.77
00:07:12.396 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:12.396 Verification LBA range: start 0x0 length 0x4ff7f
00:07:12.396 Nvme1n1p2 : 5.08 1360.07 5.31 0.00 0.00 93392.73 25105.33 81869.59
00:07:12.396 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:12.396 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:07:12.396 Nvme1n1p2 : 5.08 1311.32 5.12 0.00 0.00 96835.89 23895.43 83079.48
00:07:12.396 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:12.396 Verification LBA range: start 0x0 length 0x80000
00:07:12.396 Nvme2n1 : 5.08 1359.70 5.31 0.00 0.00 93217.49 26819.35 78239.90
00:07:12.396 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:12.396 Verification LBA range: start 0x80000 length 0x80000
00:07:12.396 Nvme2n1 : 5.08 1310.95 5.12 0.00 0.00 96654.08 24097.08 72997.02
00:07:12.396 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:12.396 Verification LBA range: start 0x0 length 0x80000
00:07:12.396 Nvme2n2 : 5.08 1359.33 5.31 0.00 0.00 93034.63 25710.28 77433.30
00:07:12.396 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:12.396 Verification LBA range: start 0x80000 length 0x80000
00:07:12.396 Nvme2n2 : 5.08 1310.55 5.12 0.00 0.00 96461.57 23996.26 76626.71
00:07:12.396 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:12.396 Verification LBA range: start 0x0 length 0x80000
00:07:12.396 Nvme2n3 : 5.10 1368.69 5.35 0.00 0.00 92248.97 4713.55 79449.80
00:07:12.396 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:12.396 Verification LBA range: start 0x80000 length 0x80000
00:07:12.396 Nvme2n3 : 5.09 1320.30 5.16 0.00 0.00 95581.58 3213.78 78643.20
00:07:12.396 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:12.396 Verification LBA range: start 0x0 length 0x20000
00:07:12.396 Nvme3n1 : 5.10 1368.32 5.35 0.00 0.00 92077.26 5041.23 81869.59
00:07:12.396 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:12.396 Verification LBA range: start 0x20000 length 0x20000
00:07:12.396 Nvme3n1 : 5.10 1329.78 5.19 0.00 0.00 94770.79 7410.61 80256.39
=================================================================================================================== 00:07:12.396 Total : 18744.62 73.22 0.00 0.00 94668.08 3213.78 100018.02 00:07:13.338 00:07:13.338 real 0m6.748s 00:07:13.338 user 0m12.747s 00:07:13.338 sys 0m0.212s 00:07:13.338 ************************************ 00:07:13.338 10:27:47 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:13.338 10:27:47 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:13.338 END TEST bdev_verify 00:07:13.338 ************************************ 00:07:13.338 10:27:47 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:13.338 10:27:47 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:13.338 10:27:47 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:13.338 10:27:47 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:13.338 ************************************ 00:07:13.338 START TEST bdev_verify_big_io 00:07:13.338 ************************************ 00:07:13.338 10:27:47 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:13.338 [2024-09-28 10:27:47.947486] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:07:13.338 [2024-09-28 10:27:47.947600] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75193 ] 00:07:13.338 [2024-09-28 10:27:48.080776] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:13.338 [2024-09-28 10:27:48.101881] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:13.600 [2024-09-28 10:27:48.139081] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.600 [2024-09-28 10:27:48.139178] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.861 Running I/O for 5 seconds... 
00:07:20.270 1511.00 IOPS, 94.44 MiB/s 2904.50 IOPS, 181.53 MiB/s 00:07:20.270 Latency(us) 00:07:20.271 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:20.271 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:20.271 Verification LBA range: start 0x0 length 0xbd0b 00:07:20.271 Nvme0n1 : 5.79 100.94 6.31 0.00 0.00 1209899.79 22483.89 1258291.20 00:07:20.271 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:20.271 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:20.271 Nvme0n1 : 5.98 94.93 5.93 0.00 0.00 1278870.96 32868.82 1832588.21 00:07:20.271 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:20.271 Verification LBA range: start 0x0 length 0x4ff8 00:07:20.271 Nvme1n1p1 : 5.93 88.66 5.54 0.00 0.00 1347569.29 79046.50 2155226.98 00:07:20.271 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:20.271 Verification LBA range: start 0x4ff8 length 0x4ff8 00:07:20.271 Nvme1n1p1 : 5.99 98.55 6.16 0.00 0.00 1213592.06 55655.19 1858399.31 00:07:20.271 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:20.271 Verification LBA range: start 0x0 length 0x4ff7 00:07:20.271 Nvme1n1p2 : 6.02 76.75 4.80 0.00 0.00 1495298.01 140347.86 2181038.08 00:07:20.271 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:20.271 Verification LBA range: start 0x4ff7 length 0x4ff7 00:07:20.271 Nvme1n1p2 : 6.03 98.46 6.15 0.00 0.00 1167197.35 69770.63 1897115.96 00:07:20.271 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:20.271 Verification LBA range: start 0x0 length 0x8000 00:07:20.271 Nvme2n1 : 6.02 110.58 6.91 0.00 0.00 1019298.71 85499.27 1180857.90 00:07:20.271 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:20.271 Verification LBA range: start 0x8000 length 0x8000 00:07:20.271 Nvme2n1 : 6.04 102.24 6.39 0.00 0.00 1097059.24 44967.78 1922927.06 00:07:20.271 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:20.271 Verification LBA range: start 0x0 length 0x8000 00:07:20.271 Nvme2n2 : 6.06 116.18 7.26 0.00 0.00 947992.85 30852.33 1103424.59 00:07:20.271 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:20.271 Verification LBA range: start 0x8000 length 0x8000 00:07:20.271 Nvme2n2 : 6.08 106.87 6.68 0.00 0.00 1016867.90 44967.78 1948738.17 00:07:20.271 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:20.271 Verification LBA range: start 0x0 length 0x8000 00:07:20.271 Nvme2n3 : 6.13 120.97 7.56 0.00 0.00 878367.34 24802.86 1135688.47 00:07:20.271 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:20.271 Verification LBA range: start 0x8000 length 0x8000 00:07:20.271 Nvme2n3 : 6.14 112.46 7.03 0.00 0.00 931736.90 47387.57 1974549.27 00:07:20.271 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:20.271 Verification LBA range: start 0x0 length 0x2000 00:07:20.271 Nvme3n1 : 6.14 135.40 8.46 0.00 0.00 766100.00 4133.81 1167952.34 00:07:20.271 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:20.271 Verification LBA range: start 0x2000 length 0x2000 00:07:20.271 Nvme3n1 : 6.20 149.59 9.35 0.00 0.00 684325.60 920.02 1458327.24 00:07:20.271 =================================================================================================================== 
00:07:20.271 Total : 1512.59 94.54 0.00 0.00 1037189.69 920.02 2181038.08 00:07:20.840 00:07:20.840 real 0m7.654s 00:07:20.840 user 0m14.555s 00:07:20.840 sys 0m0.222s 00:07:20.840 ************************************ 00:07:20.840 END TEST bdev_verify_big_io 00:07:20.840 ************************************ 00:07:20.840 10:27:55 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.840 10:27:55 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:20.840 10:27:55 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:20.840 10:27:55 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:20.840 10:27:55 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:20.840 10:27:55 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:20.840 ************************************ 00:07:20.840 START TEST bdev_write_zeroes 00:07:20.840 ************************************ 00:07:20.840 10:27:55 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:21.100 [2024-09-28 10:27:55.664815] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:07:21.101 [2024-09-28 10:27:55.664930] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75291 ] 00:07:21.101 [2024-09-28 10:27:55.792738] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:21.101 [2024-09-28 10:27:55.813829] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.101 [2024-09-28 10:27:55.846310] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.672 Running I/O for 1 seconds... 
00:07:22.610 57344.00 IOPS, 224.00 MiB/s 00:07:22.610 Latency(us) 00:07:22.610 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:22.610 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:22.610 Nvme0n1 : 1.02 8184.18 31.97 0.00 0.00 15604.06 6805.66 29239.14 00:07:22.610 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:22.610 Nvme1n1p1 : 1.03 8174.20 31.93 0.00 0.00 15602.57 12300.60 26214.40 00:07:22.610 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:22.610 Nvme1n1p2 : 1.03 8163.70 31.89 0.00 0.00 15543.86 12199.78 23290.49 00:07:22.610 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:22.610 Nvme2n1 : 1.03 8154.08 31.85 0.00 0.00 15497.75 12451.84 22383.06 00:07:22.610 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:22.610 Nvme2n2 : 1.03 8144.91 31.82 0.00 0.00 15465.77 9527.93 22181.42 00:07:22.610 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:22.610 Nvme2n3 : 1.03 8135.77 31.78 0.00 0.00 15442.55 8771.74 23290.49 00:07:22.610 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:22.610 Nvme3n1 : 1.03 8126.52 31.74 0.00 0.00 15433.42 8469.27 24702.03 00:07:22.610 =================================================================================================================== 00:07:22.610 Total : 57083.36 222.98 0.00 0.00 15512.85 6805.66 29239.14 00:07:22.870 00:07:22.870 real 0m1.853s 00:07:22.870 user 0m1.581s 00:07:22.870 sys 0m0.160s 00:07:22.870 10:27:57 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:22.870 ************************************ 00:07:22.870 END TEST bdev_write_zeroes 00:07:22.870 ************************************ 00:07:22.870 10:27:57 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:22.870 10:27:57 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:22.870 10:27:57 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:22.870 10:27:57 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:22.870 10:27:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:22.870 ************************************ 00:07:22.870 START TEST bdev_json_nonenclosed 00:07:22.870 ************************************ 00:07:22.870 10:27:57 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:22.870 [2024-09-28 10:27:57.579203] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:07:22.870 [2024-09-28 10:27:57.579319] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75333 ] 00:07:23.134 [2024-09-28 10:27:57.706638] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:07:23.134 [2024-09-28 10:27:57.725417] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.134 [2024-09-28 10:27:57.758748] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.134 [2024-09-28 10:27:57.758833] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:23.134 [2024-09-28 10:27:57.758849] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:23.134 [2024-09-28 10:27:57.758861] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:23.134 00:07:23.134 real 0m0.320s 00:07:23.134 user 0m0.117s 00:07:23.134 sys 0m0.101s 00:07:23.134 10:27:57 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:23.134 ************************************ 00:07:23.134 END TEST bdev_json_nonenclosed 00:07:23.134 ************************************ 00:07:23.134 10:27:57 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:23.134 10:27:57 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:23.134 10:27:57 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:23.134 10:27:57 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:23.134 10:27:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:23.134 ************************************ 00:07:23.134 START TEST bdev_json_nonarray 00:07:23.134 ************************************ 00:07:23.134 10:27:57 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:23.395 [2024-09-28 10:27:57.956378] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:07:23.395 [2024-09-28 10:27:57.956492] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75353 ] 00:07:23.395 [2024-09-28 10:27:58.084435] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:23.395 [2024-09-28 10:27:58.102320] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.395 [2024-09-28 10:27:58.135341] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.395 [2024-09-28 10:27:58.135429] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
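The json_config *ERROR* records above ("not enclosed in {}" and "'subsystems' should be an array") come from the deliberately malformed fixtures nonenclosed.json and nonarray.json. Those fixtures are not reproduced in this log; purely as an illustrative sketch, a minimal configuration of the shape json_config expects (a top-level object whose "subsystems" member is an array) could be written like this, with bdev_malloc_create used only as an example method:

# Write a minimal, well-formed bdev JSON configuration (illustrative path and contents).
cat > /tmp/minimal_bdev_config.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 16384, "block_size": 512 }
        }
      ]
    }
  ]
}
EOF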
00:07:23.395 [2024-09-28 10:27:58.135445] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:23.395 [2024-09-28 10:27:58.135454] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:23.654 00:07:23.654 real 0m0.310s 00:07:23.654 user 0m0.120s 00:07:23.654 sys 0m0.087s 00:07:23.654 10:27:58 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:23.654 ************************************ 00:07:23.654 END TEST bdev_json_nonarray 00:07:23.654 ************************************ 00:07:23.654 10:27:58 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:23.654 10:27:58 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:07:23.654 10:27:58 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:07:23.654 10:27:58 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:23.654 10:27:58 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:23.654 10:27:58 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:23.654 10:27:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:23.654 ************************************ 00:07:23.654 START TEST bdev_gpt_uuid 00:07:23.654 ************************************ 00:07:23.654 10:27:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1125 -- # bdev_gpt_uuid 00:07:23.654 10:27:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:07:23.654 10:27:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:07:23.654 10:27:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=75373 00:07:23.654 10:27:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:23.654 10:27:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 75373 00:07:23.654 10:27:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # '[' -z 75373 ']' 00:07:23.654 10:27:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.654 10:27:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:23.654 10:27:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.654 10:27:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:23.654 10:27:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:23.654 10:27:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:23.654 [2024-09-28 10:27:58.338570] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:07:23.654 [2024-09-28 10:27:58.338693] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75373 ] 00:07:23.914 [2024-09-28 10:27:58.466696] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:07:23.914 [2024-09-28 10:27:58.486040] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.914 [2024-09-28 10:27:58.519729] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.480 10:27:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:24.480 10:27:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # return 0 00:07:24.480 10:27:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:24.480 10:27:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.480 10:27:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:24.737 Some configs were skipped because the RPC state that can call them passed over. 00:07:24.737 10:27:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.737 10:27:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:07:24.737 10:27:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.737 10:27:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:24.737 10:27:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.737 10:27:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:24.737 10:27:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.737 10:27:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:24.737 10:27:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.738 10:27:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:07:24.738 { 00:07:24.738 "name": "Nvme1n1p1", 00:07:24.738 "aliases": [ 00:07:24.738 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:24.738 ], 00:07:24.738 "product_name": "GPT Disk", 00:07:24.738 "block_size": 4096, 00:07:24.738 "num_blocks": 655104, 00:07:24.738 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:24.738 "assigned_rate_limits": { 00:07:24.738 "rw_ios_per_sec": 0, 00:07:24.738 "rw_mbytes_per_sec": 0, 00:07:24.738 "r_mbytes_per_sec": 0, 00:07:24.738 "w_mbytes_per_sec": 0 00:07:24.738 }, 00:07:24.738 "claimed": false, 00:07:24.738 "zoned": false, 00:07:24.738 "supported_io_types": { 00:07:24.738 "read": true, 00:07:24.738 "write": true, 00:07:24.738 "unmap": true, 00:07:24.738 "flush": true, 00:07:24.738 "reset": true, 00:07:24.738 "nvme_admin": false, 00:07:24.738 "nvme_io": false, 00:07:24.738 "nvme_io_md": false, 00:07:24.738 "write_zeroes": true, 00:07:24.738 "zcopy": false, 00:07:24.738 "get_zone_info": false, 00:07:24.738 "zone_management": false, 00:07:24.738 "zone_append": false, 00:07:24.738 "compare": true, 00:07:24.738 "compare_and_write": false, 00:07:24.738 "abort": true, 00:07:24.738 "seek_hole": false, 00:07:24.738 "seek_data": false, 00:07:24.738 "copy": true, 00:07:24.738 "nvme_iov_md": false 00:07:24.738 }, 00:07:24.738 "driver_specific": { 00:07:24.738 "gpt": { 00:07:24.738 "base_bdev": "Nvme1n1", 00:07:24.738 "offset_blocks": 256, 00:07:24.738 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:24.738 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 
00:07:24.738 "partition_name": "SPDK_TEST_first" 00:07:24.738 } 00:07:24.738 } 00:07:24.738 } 00:07:24.738 ]' 00:07:24.738 10:27:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:07:24.996 10:27:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:07:24.996 10:27:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:07:24.996 10:27:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:24.996 10:27:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:24.996 10:27:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:24.996 10:27:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:24.996 10:27:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.996 10:27:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:24.996 10:27:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.996 10:27:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:07:24.996 { 00:07:24.996 "name": "Nvme1n1p2", 00:07:24.996 "aliases": [ 00:07:24.996 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:24.996 ], 00:07:24.996 "product_name": "GPT Disk", 00:07:24.996 "block_size": 4096, 00:07:24.996 "num_blocks": 655103, 00:07:24.996 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:24.996 "assigned_rate_limits": { 00:07:24.996 "rw_ios_per_sec": 0, 00:07:24.996 "rw_mbytes_per_sec": 0, 00:07:24.996 "r_mbytes_per_sec": 0, 00:07:24.996 "w_mbytes_per_sec": 0 00:07:24.996 }, 00:07:24.996 "claimed": false, 00:07:24.996 "zoned": false, 00:07:24.996 "supported_io_types": { 00:07:24.996 "read": true, 00:07:24.996 "write": true, 00:07:24.996 "unmap": true, 00:07:24.996 "flush": true, 00:07:24.996 "reset": true, 00:07:24.996 "nvme_admin": false, 00:07:24.996 "nvme_io": false, 00:07:24.996 "nvme_io_md": false, 00:07:24.996 "write_zeroes": true, 00:07:24.996 "zcopy": false, 00:07:24.996 "get_zone_info": false, 00:07:24.996 "zone_management": false, 00:07:24.996 "zone_append": false, 00:07:24.996 "compare": true, 00:07:24.996 "compare_and_write": false, 00:07:24.996 "abort": true, 00:07:24.996 "seek_hole": false, 00:07:24.996 "seek_data": false, 00:07:24.996 "copy": true, 00:07:24.996 "nvme_iov_md": false 00:07:24.996 }, 00:07:24.996 "driver_specific": { 00:07:24.996 "gpt": { 00:07:24.996 "base_bdev": "Nvme1n1", 00:07:24.996 "offset_blocks": 655360, 00:07:24.996 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:24.996 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:24.996 "partition_name": "SPDK_TEST_second" 00:07:24.996 } 00:07:24.996 } 00:07:24.996 } 00:07:24.996 ]' 00:07:24.996 10:27:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:07:24.996 10:27:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:07:24.996 10:27:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:07:24.996 10:27:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ 
abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:24.996 10:27:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:24.996 10:27:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:24.996 10:27:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 75373 00:07:24.996 10:27:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # '[' -z 75373 ']' 00:07:24.996 10:27:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # kill -0 75373 00:07:24.996 10:27:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # uname 00:07:24.996 10:27:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:24.996 10:27:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75373 00:07:24.996 killing process with pid 75373 00:07:24.996 10:27:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:24.996 10:27:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:24.996 10:27:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75373' 00:07:24.996 10:27:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@969 -- # kill 75373 00:07:24.996 10:27:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@974 -- # wait 75373 00:07:25.254 00:07:25.254 real 0m1.729s 00:07:25.254 user 0m1.887s 00:07:25.254 sys 0m0.328s 00:07:25.254 10:27:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.254 ************************************ 00:07:25.254 END TEST bdev_gpt_uuid 00:07:25.254 ************************************ 00:07:25.254 10:27:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:25.254 10:28:00 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:07:25.254 10:28:00 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:25.254 10:28:00 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:07:25.254 10:28:00 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:25.254 10:28:00 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:25.254 10:28:00 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:25.254 10:28:00 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:25.254 10:28:00 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:25.254 10:28:00 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:25.511 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:25.771 Waiting for block devices as requested 00:07:25.771 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:25.771 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:26.029 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:26.030 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:31.314 * Events for some block/disk devices (0000:00:13.0) were not caught, 
they may be missing 00:07:31.314 10:28:05 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:31.314 10:28:05 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:31.572 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:31.572 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:31.572 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:31.572 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:31.572 10:28:06 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:31.572 00:07:31.572 real 0m49.436s 00:07:31.572 user 1m2.379s 00:07:31.572 sys 0m7.784s 00:07:31.572 10:28:06 blockdev_nvme_gpt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:31.572 10:28:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:31.572 ************************************ 00:07:31.572 END TEST blockdev_nvme_gpt 00:07:31.572 ************************************ 00:07:31.572 10:28:06 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:31.572 10:28:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:31.572 10:28:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:31.572 10:28:06 -- common/autotest_common.sh@10 -- # set +x 00:07:31.572 ************************************ 00:07:31.572 START TEST nvme 00:07:31.572 ************************************ 00:07:31.572 10:28:06 nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:31.572 * Looking for test storage... 00:07:31.572 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:31.572 10:28:06 nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:31.572 10:28:06 nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:31.572 10:28:06 nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:07:31.572 10:28:06 nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:31.572 10:28:06 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:31.572 10:28:06 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:31.572 10:28:06 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:31.572 10:28:06 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:31.572 10:28:06 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:31.572 10:28:06 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:31.572 10:28:06 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:31.572 10:28:06 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:31.572 10:28:06 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:31.572 10:28:06 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:31.572 10:28:06 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:31.572 10:28:06 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:31.572 10:28:06 nvme -- scripts/common.sh@345 -- # : 1 00:07:31.572 10:28:06 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:31.572 10:28:06 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:31.572 10:28:06 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:31.572 10:28:06 nvme -- scripts/common.sh@353 -- # local d=1 00:07:31.572 10:28:06 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:31.572 10:28:06 nvme -- scripts/common.sh@355 -- # echo 1 00:07:31.572 10:28:06 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:31.572 10:28:06 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:31.572 10:28:06 nvme -- scripts/common.sh@353 -- # local d=2 00:07:31.572 10:28:06 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:31.572 10:28:06 nvme -- scripts/common.sh@355 -- # echo 2 00:07:31.572 10:28:06 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:31.572 10:28:06 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:31.572 10:28:06 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:31.572 10:28:06 nvme -- scripts/common.sh@368 -- # return 0 00:07:31.572 10:28:06 nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:31.572 10:28:06 nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:31.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.572 --rc genhtml_branch_coverage=1 00:07:31.572 --rc genhtml_function_coverage=1 00:07:31.572 --rc genhtml_legend=1 00:07:31.572 --rc geninfo_all_blocks=1 00:07:31.572 --rc geninfo_unexecuted_blocks=1 00:07:31.572 00:07:31.572 ' 00:07:31.572 10:28:06 nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:31.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.572 --rc genhtml_branch_coverage=1 00:07:31.572 --rc genhtml_function_coverage=1 00:07:31.572 --rc genhtml_legend=1 00:07:31.572 --rc geninfo_all_blocks=1 00:07:31.572 --rc geninfo_unexecuted_blocks=1 00:07:31.572 00:07:31.572 ' 00:07:31.572 10:28:06 nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:31.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.572 --rc genhtml_branch_coverage=1 00:07:31.572 --rc genhtml_function_coverage=1 00:07:31.572 --rc genhtml_legend=1 00:07:31.572 --rc geninfo_all_blocks=1 00:07:31.572 --rc geninfo_unexecuted_blocks=1 00:07:31.572 00:07:31.572 ' 00:07:31.572 10:28:06 nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:31.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.572 --rc genhtml_branch_coverage=1 00:07:31.572 --rc genhtml_function_coverage=1 00:07:31.572 --rc genhtml_legend=1 00:07:31.572 --rc geninfo_all_blocks=1 00:07:31.572 --rc geninfo_unexecuted_blocks=1 00:07:31.572 00:07:31.572 ' 00:07:31.572 10:28:06 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:32.140 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:32.397 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:32.397 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:32.655 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:32.655 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:32.655 10:28:07 nvme -- nvme/nvme.sh@79 -- # uname 00:07:32.655 10:28:07 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:32.655 10:28:07 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:32.655 10:28:07 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:32.655 10:28:07 nvme -- common/autotest_common.sh@1082 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:32.655 10:28:07 nvme -- 
common/autotest_common.sh@1068 -- # _randomize_va_space=2 00:07:32.655 10:28:07 nvme -- common/autotest_common.sh@1069 -- # echo 0 00:07:32.655 10:28:07 nvme -- common/autotest_common.sh@1071 -- # stubpid=75997 00:07:32.655 10:28:07 nvme -- common/autotest_common.sh@1072 -- # echo Waiting for stub to ready for secondary processes... 00:07:32.655 Waiting for stub to ready for secondary processes... 00:07:32.656 10:28:07 nvme -- common/autotest_common.sh@1070 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:32.656 10:28:07 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:32.656 10:28:07 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/75997 ]] 00:07:32.656 10:28:07 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:07:32.656 [2024-09-28 10:28:07.317179] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:07:32.656 [2024-09-28 10:28:07.317289] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:33.590 [2024-09-28 10:28:08.055826] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:33.590 [2024-09-28 10:28:08.077462] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:33.590 [2024-09-28 10:28:08.098104] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:33.590 [2024-09-28 10:28:08.098250] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.590 [2024-09-28 10:28:08.098271] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:33.590 [2024-09-28 10:28:08.107926] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:33.590 [2024-09-28 10:28:08.107980] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:33.590 [2024-09-28 10:28:08.116626] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:33.590 [2024-09-28 10:28:08.116841] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:33.590 [2024-09-28 10:28:08.117603] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:33.590 [2024-09-28 10:28:08.117977] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:33.590 [2024-09-28 10:28:08.118083] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:33.590 [2024-09-28 10:28:08.118785] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:33.590 [2024-09-28 10:28:08.119059] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:33.590 [2024-09-28 10:28:08.119173] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:33.590 [2024-09-28 10:28:08.119996] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:33.590 [2024-09-28 10:28:08.120225] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:33.590 [2024-09-28 10:28:08.120372] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 
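The stub launched above holds the hugepage memory and probed NVMe controllers in a long-lived DPDK primary process, so that the subsequent nvme test binaries can attach to already-initialized controllers rather than re-probing them each time (this is the stub app's usual role; the exact attach mechanics are not shown in this log). A condensed sketch of what start_stub does, simplified from the trace above (the error handling is illustrative):

# Start the stub with 4096 MB of hugepage memory, shm id 0, on cores 1-3 (mask 0xE),
# then wait for /var/run/spdk_stub0, which the harness polls as the readiness signal.
/home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE &
stubpid=$!
while [ ! -e /var/run/spdk_stub0 ]; do
    [ -e "/proc/$stubpid" ] || { echo "stub exited before becoming ready" >&2; exit 1; }
    sleep 1
done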
00:07:33.590 [2024-09-28 10:28:08.120469] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:33.590 [2024-09-28 10:28:08.120577] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:33.590 10:28:08 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:33.590 done. 00:07:33.590 10:28:08 nvme -- common/autotest_common.sh@1078 -- # echo done. 00:07:33.590 10:28:08 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:33.590 10:28:08 nvme -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:07:33.590 10:28:08 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:33.590 10:28:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:33.590 ************************************ 00:07:33.590 START TEST nvme_reset 00:07:33.590 ************************************ 00:07:33.591 10:28:08 nvme.nvme_reset -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:33.850 Initializing NVMe Controllers 00:07:33.850 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:33.850 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:33.850 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:33.850 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:33.850 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:33.850 00:07:33.850 real 0m0.182s 00:07:33.850 user 0m0.054s 00:07:33.850 sys 0m0.085s 00:07:33.850 10:28:08 nvme.nvme_reset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:33.850 10:28:08 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:33.850 ************************************ 00:07:33.850 END TEST nvme_reset 00:07:33.850 ************************************ 00:07:33.850 10:28:08 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:33.850 10:28:08 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:33.850 10:28:08 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:33.850 10:28:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:33.850 ************************************ 00:07:33.850 START TEST nvme_identify 00:07:33.850 ************************************ 00:07:33.850 10:28:08 nvme.nvme_identify -- common/autotest_common.sh@1125 -- # nvme_identify 00:07:33.850 10:28:08 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:33.850 10:28:08 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:33.850 10:28:08 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:33.850 10:28:08 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:33.850 10:28:08 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # bdfs=() 00:07:33.850 10:28:08 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # local bdfs 00:07:33.850 10:28:08 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:33.850 10:28:08 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:33.850 10:28:08 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:07:33.850 10:28:08 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:07:33.850 10:28:08 nvme.nvme_identify -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 
0000:00:12.0 0000:00:13.0 00:07:33.851 10:28:08 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:34.114 [2024-09-28 10:28:08.757845] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 76019 terminated unexpected 00:07:34.114 ===================================================== 00:07:34.114 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:34.114 ===================================================== 00:07:34.114 Controller Capabilities/Features 00:07:34.114 ================================ 00:07:34.114 Vendor ID: 1b36 00:07:34.114 Subsystem Vendor ID: 1af4 00:07:34.114 Serial Number: 12340 00:07:34.114 Model Number: QEMU NVMe Ctrl 00:07:34.114 Firmware Version: 8.0.0 00:07:34.114 Recommended Arb Burst: 6 00:07:34.114 IEEE OUI Identifier: 00 54 52 00:07:34.114 Multi-path I/O 00:07:34.114 May have multiple subsystem ports: No 00:07:34.114 May have multiple controllers: No 00:07:34.114 Associated with SR-IOV VF: No 00:07:34.114 Max Data Transfer Size: 524288 00:07:34.114 Max Number of Namespaces: 256 00:07:34.114 Max Number of I/O Queues: 64 00:07:34.114 NVMe Specification Version (VS): 1.4 00:07:34.114 NVMe Specification Version (Identify): 1.4 00:07:34.114 Maximum Queue Entries: 2048 00:07:34.114 Contiguous Queues Required: Yes 00:07:34.114 Arbitration Mechanisms Supported 00:07:34.114 Weighted Round Robin: Not Supported 00:07:34.114 Vendor Specific: Not Supported 00:07:34.114 Reset Timeout: 7500 ms 00:07:34.114 Doorbell Stride: 4 bytes 00:07:34.114 NVM Subsystem Reset: Not Supported 00:07:34.114 Command Sets Supported 00:07:34.114 NVM Command Set: Supported 00:07:34.114 Boot Partition: Not Supported 00:07:34.114 Memory Page Size Minimum: 4096 bytes 00:07:34.114 Memory Page Size Maximum: 65536 bytes 00:07:34.114 Persistent Memory Region: Not Supported 00:07:34.114 Optional Asynchronous Events Supported 00:07:34.114 Namespace Attribute Notices: Supported 00:07:34.114 Firmware Activation Notices: Not Supported 00:07:34.114 ANA Change Notices: Not Supported 00:07:34.114 PLE Aggregate Log Change Notices: Not Supported 00:07:34.114 LBA Status Info Alert Notices: Not Supported 00:07:34.114 EGE Aggregate Log Change Notices: Not Supported 00:07:34.114 Normal NVM Subsystem Shutdown event: Not Supported 00:07:34.114 Zone Descriptor Change Notices: Not Supported 00:07:34.114 Discovery Log Change Notices: Not Supported 00:07:34.114 Controller Attributes 00:07:34.114 128-bit Host Identifier: Not Supported 00:07:34.114 Non-Operational Permissive Mode: Not Supported 00:07:34.114 NVM Sets: Not Supported 00:07:34.114 Read Recovery Levels: Not Supported 00:07:34.114 Endurance Groups: Not Supported 00:07:34.114 Predictable Latency Mode: Not Supported 00:07:34.114 Traffic Based Keep ALive: Not Supported 00:07:34.114 Namespace Granularity: Not Supported 00:07:34.114 SQ Associations: Not Supported 00:07:34.114 UUID List: Not Supported 00:07:34.114 Multi-Domain Subsystem: Not Supported 00:07:34.114 Fixed Capacity Management: Not Supported 00:07:34.114 Variable Capacity Management: Not Supported 00:07:34.114 Delete Endurance Group: Not Supported 00:07:34.114 Delete NVM Set: Not Supported 00:07:34.114 Extended LBA Formats Supported: Supported 00:07:34.114 Flexible Data Placement Supported: Not Supported 00:07:34.114 00:07:34.114 Controller Memory Buffer Support 00:07:34.114 ================================ 00:07:34.114 Supported: No 00:07:34.114 00:07:34.114 Persistent Memory Region Support 00:07:34.114 
================================ 00:07:34.114 Supported: No 00:07:34.114 00:07:34.114 Admin Command Set Attributes 00:07:34.114 ============================ 00:07:34.115 Security Send/Receive: Not Supported 00:07:34.115 Format NVM: Supported 00:07:34.115 Firmware Activate/Download: Not Supported 00:07:34.115 Namespace Management: Supported 00:07:34.115 Device Self-Test: Not Supported 00:07:34.115 Directives: Supported 00:07:34.115 NVMe-MI: Not Supported 00:07:34.115 Virtualization Management: Not Supported 00:07:34.115 Doorbell Buffer Config: Supported 00:07:34.115 Get LBA Status Capability: Not Supported 00:07:34.115 Command & Feature Lockdown Capability: Not Supported 00:07:34.115 Abort Command Limit: 4 00:07:34.115 Async Event Request Limit: 4 00:07:34.115 Number of Firmware Slots: N/A 00:07:34.115 Firmware Slot 1 Read-Only: N/A 00:07:34.115 Firmware Activation Without Reset: N/A 00:07:34.115 Multiple Update Detection Support: N/A 00:07:34.115 Firmware Update Granularity: No Information Provided 00:07:34.115 Per-Namespace SMART Log: Yes 00:07:34.115 Asymmetric Namespace Access Log Page: Not Supported 00:07:34.115 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:34.115 Command Effects Log Page: Supported 00:07:34.115 Get Log Page Extended Data: Supported 00:07:34.115 Telemetry Log Pages: Not Supported 00:07:34.115 Persistent Event Log Pages: Not Supported 00:07:34.115 Supported Log Pages Log Page: May Support 00:07:34.115 Commands Supported & Effects Log Page: Not Supported 00:07:34.115 Feature Identifiers & Effects Log Page:May Support 00:07:34.115 NVMe-MI Commands & Effects Log Page: May Support 00:07:34.115 Data Area 4 for Telemetry Log: Not Supported 00:07:34.115 Error Log Page Entries Supported: 1 00:07:34.115 Keep Alive: Not Supported 00:07:34.115 00:07:34.115 NVM Command Set Attributes 00:07:34.115 ========================== 00:07:34.115 Submission Queue Entry Size 00:07:34.115 Max: 64 00:07:34.115 Min: 64 00:07:34.115 Completion Queue Entry Size 00:07:34.115 Max: 16 00:07:34.115 Min: 16 00:07:34.115 Number of Namespaces: 256 00:07:34.115 Compare Command: Supported 00:07:34.115 Write Uncorrectable Command: Not Supported 00:07:34.115 Dataset Management Command: Supported 00:07:34.115 Write Zeroes Command: Supported 00:07:34.115 Set Features Save Field: Supported 00:07:34.115 Reservations: Not Supported 00:07:34.115 Timestamp: Supported 00:07:34.115 Copy: Supported 00:07:34.115 Volatile Write Cache: Present 00:07:34.115 Atomic Write Unit (Normal): 1 00:07:34.115 Atomic Write Unit (PFail): 1 00:07:34.115 Atomic Compare & Write Unit: 1 00:07:34.115 Fused Compare & Write: Not Supported 00:07:34.115 Scatter-Gather List 00:07:34.115 SGL Command Set: Supported 00:07:34.115 SGL Keyed: Not Supported 00:07:34.115 SGL Bit Bucket Descriptor: Not Supported 00:07:34.115 SGL Metadata Pointer: Not Supported 00:07:34.115 Oversized SGL: Not Supported 00:07:34.115 SGL Metadata Address: Not Supported 00:07:34.115 SGL Offset: Not Supported 00:07:34.115 Transport SGL Data Block: Not Supported 00:07:34.115 Replay Protected Memory Block: Not Supported 00:07:34.115 00:07:34.115 Firmware Slot Information 00:07:34.115 ========================= 00:07:34.115 Active slot: 1 00:07:34.115 Slot 1 Firmware Revision: 1.0 00:07:34.115 00:07:34.115 00:07:34.115 Commands Supported and Effects 00:07:34.115 ============================== 00:07:34.115 Admin Commands 00:07:34.115 -------------- 00:07:34.115 Delete I/O Submission Queue (00h): Supported 00:07:34.115 Create I/O Submission Queue (01h): Supported 00:07:34.115 
Get Log Page (02h): Supported 00:07:34.115 Delete I/O Completion Queue (04h): Supported 00:07:34.115 Create I/O Completion Queue (05h): Supported 00:07:34.115 Identify (06h): Supported 00:07:34.115 Abort (08h): Supported 00:07:34.115 Set Features (09h): Supported 00:07:34.115 Get Features (0Ah): Supported 00:07:34.115 Asynchronous Event Request (0Ch): Supported 00:07:34.115 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:34.115 Directive Send (19h): Supported 00:07:34.115 Directive Receive (1Ah): Supported 00:07:34.115 Virtualization Management (1Ch): Supported 00:07:34.115 Doorbell Buffer Config (7Ch): Supported 00:07:34.115 Format NVM (80h): Supported LBA-Change 00:07:34.115 I/O Commands 00:07:34.115 ------------ 00:07:34.115 Flush (00h): Supported LBA-Change 00:07:34.115 Write (01h): Supported LBA-Change 00:07:34.115 Read (02h): Supported 00:07:34.115 Compare (05h): Supported 00:07:34.115 Write Zeroes (08h): Supported LBA-Change 00:07:34.115 Dataset Management (09h): Supported LBA-Change 00:07:34.115 Unknown (0Ch): Supported 00:07:34.115 Unknown (12h): Supported 00:07:34.115 Copy (19h): Supported LBA-Change 00:07:34.115 Unknown (1Dh): Supported LBA-Change 00:07:34.115 00:07:34.115 Error Log 00:07:34.115 ========= 00:07:34.115 00:07:34.115 Arbitration 00:07:34.115 =========== 00:07:34.115 Arbitration Burst: no limit 00:07:34.115 00:07:34.115 Power Management 00:07:34.115 ================ 00:07:34.115 Number of Power States: 1 00:07:34.115 Current Power State: Power State #0 00:07:34.115 Power State #0: 00:07:34.115 Max Power: 25.00 W 00:07:34.115 Non-Operational State: Operational 00:07:34.115 Entry Latency: 16 microseconds 00:07:34.115 Exit Latency: 4 microseconds 00:07:34.115 Relative Read Throughput: 0 00:07:34.115 Relative Read Latency: 0 00:07:34.115 Relative Write Throughput: 0 00:07:34.115 Relative Write Latency: 0 00:07:34.115 Idle Power[2024-09-28 10:28:08.758761] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 76019 terminated unexpected 00:07:34.115 : Not Reported 00:07:34.115 Active Power: Not Reported 00:07:34.115 Non-Operational Permissive Mode: Not Supported 00:07:34.115 00:07:34.115 Health Information 00:07:34.115 ================== 00:07:34.115 Critical Warnings: 00:07:34.115 Available Spare Space: OK 00:07:34.115 Temperature: OK 00:07:34.115 Device Reliability: OK 00:07:34.115 Read Only: No 00:07:34.115 Volatile Memory Backup: OK 00:07:34.115 Current Temperature: 323 Kelvin (50 Celsius) 00:07:34.115 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:34.115 Available Spare: 0% 00:07:34.115 Available Spare Threshold: 0% 00:07:34.115 Life Percentage Used: 0% 00:07:34.115 Data Units Read: 642 00:07:34.115 Data Units Written: 570 00:07:34.115 Host Read Commands: 33356 00:07:34.115 Host Write Commands: 33142 00:07:34.115 Controller Busy Time: 0 minutes 00:07:34.115 Power Cycles: 0 00:07:34.115 Power On Hours: 0 hours 00:07:34.115 Unsafe Shutdowns: 0 00:07:34.115 Unrecoverable Media Errors: 0 00:07:34.115 Lifetime Error Log Entries: 0 00:07:34.115 Warning Temperature Time: 0 minutes 00:07:34.115 Critical Temperature Time: 0 minutes 00:07:34.115 00:07:34.115 Number of Queues 00:07:34.115 ================ 00:07:34.115 Number of I/O Submission Queues: 64 00:07:34.115 Number of I/O Completion Queues: 64 00:07:34.115 00:07:34.115 ZNS Specific Controller Data 00:07:34.115 ============================ 00:07:34.115 Zone Append Size Limit: 0 00:07:34.115 00:07:34.115 00:07:34.115 Active Namespaces 00:07:34.115 ================= 
00:07:34.115 Namespace ID:1 00:07:34.115 Error Recovery Timeout: Unlimited 00:07:34.115 Command Set Identifier: NVM (00h) 00:07:34.115 Deallocate: Supported 00:07:34.116 Deallocated/Unwritten Error: Supported 00:07:34.116 Deallocated Read Value: All 0x00 00:07:34.116 Deallocate in Write Zeroes: Not Supported 00:07:34.116 Deallocated Guard Field: 0xFFFF 00:07:34.116 Flush: Supported 00:07:34.116 Reservation: Not Supported 00:07:34.116 Metadata Transferred as: Separate Metadata Buffer 00:07:34.116 Namespace Sharing Capabilities: Private 00:07:34.116 Size (in LBAs): 1548666 (5GiB) 00:07:34.116 Capacity (in LBAs): 1548666 (5GiB) 00:07:34.116 Utilization (in LBAs): 1548666 (5GiB) 00:07:34.116 Thin Provisioning: Not Supported 00:07:34.116 Per-NS Atomic Units: No 00:07:34.116 Maximum Single Source Range Length: 128 00:07:34.116 Maximum Copy Length: 128 00:07:34.116 Maximum Source Range Count: 128 00:07:34.116 NGUID/EUI64 Never Reused: No 00:07:34.116 Namespace Write Protected: No 00:07:34.116 Number of LBA Formats: 8 00:07:34.116 Current LBA Format: LBA Format #07 00:07:34.116 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:34.116 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:34.116 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:34.116 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:34.116 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:34.116 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:34.116 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:34.116 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:34.116 00:07:34.116 NVM Specific Namespace Data 00:07:34.116 =========================== 00:07:34.116 Logical Block Storage Tag Mask: 0 00:07:34.116 Protection Information Capabilities: 00:07:34.116 16b Guard Protection Information Storage Tag Support: No 00:07:34.116 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:34.116 Storage Tag Check Read Support: No 00:07:34.116 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.116 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.116 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.116 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.116 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.116 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.116 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.116 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.116 ===================================================== 00:07:34.116 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:34.116 ===================================================== 00:07:34.116 Controller Capabilities/Features 00:07:34.116 ================================ 00:07:34.116 Vendor ID: 1b36 00:07:34.116 Subsystem Vendor ID: 1af4 00:07:34.116 Serial Number: 12341 00:07:34.116 Model Number: QEMU NVMe Ctrl 00:07:34.116 Firmware Version: 8.0.0 00:07:34.116 Recommended Arb Burst: 6 00:07:34.116 IEEE OUI Identifier: 00 54 52 00:07:34.116 Multi-path I/O 00:07:34.116 May have multiple subsystem ports: No 00:07:34.116 May have multiple controllers: No 00:07:34.116 Associated with SR-IOV VF: No 00:07:34.116 Max Data Transfer Size: 
524288 00:07:34.116 Max Number of Namespaces: 256 00:07:34.116 Max Number of I/O Queues: 64 00:07:34.116 NVMe Specification Version (VS): 1.4 00:07:34.116 NVMe Specification Version (Identify): 1.4 00:07:34.116 Maximum Queue Entries: 2048 00:07:34.116 Contiguous Queues Required: Yes 00:07:34.116 Arbitration Mechanisms Supported 00:07:34.116 Weighted Round Robin: Not Supported 00:07:34.116 Vendor Specific: Not Supported 00:07:34.116 Reset Timeout: 7500 ms 00:07:34.116 Doorbell Stride: 4 bytes 00:07:34.116 NVM Subsystem Reset: Not Supported 00:07:34.116 Command Sets Supported 00:07:34.116 NVM Command Set: Supported 00:07:34.116 Boot Partition: Not Supported 00:07:34.116 Memory Page Size Minimum: 4096 bytes 00:07:34.116 Memory Page Size Maximum: 65536 bytes 00:07:34.116 Persistent Memory Region: Not Supported 00:07:34.116 Optional Asynchronous Events Supported 00:07:34.116 Namespace Attribute Notices: Supported 00:07:34.116 Firmware Activation Notices: Not Supported 00:07:34.116 ANA Change Notices: Not Supported 00:07:34.116 PLE Aggregate Log Change Notices: Not Supported 00:07:34.116 LBA Status Info Alert Notices: Not Supported 00:07:34.116 EGE Aggregate Log Change Notices: Not Supported 00:07:34.116 Normal NVM Subsystem Shutdown event: Not Supported 00:07:34.116 Zone Descriptor Change Notices: Not Supported 00:07:34.116 Discovery Log Change Notices: Not Supported 00:07:34.116 Controller Attributes 00:07:34.116 128-bit Host Identifier: Not Supported 00:07:34.116 Non-Operational Permissive Mode: Not Supported 00:07:34.116 NVM Sets: Not Supported 00:07:34.116 Read Recovery Levels: Not Supported 00:07:34.116 Endurance Groups: Not Supported 00:07:34.116 Predictable Latency Mode: Not Supported 00:07:34.116 Traffic Based Keep ALive: Not Supported 00:07:34.116 Namespace Granularity: Not Supported 00:07:34.116 SQ Associations: Not Supported 00:07:34.116 UUID List: Not Supported 00:07:34.116 Multi-Domain Subsystem: Not Supported 00:07:34.116 Fixed Capacity Management: Not Supported 00:07:34.116 Variable Capacity Management: Not Supported 00:07:34.116 Delete Endurance Group: Not Supported 00:07:34.116 Delete NVM Set: Not Supported 00:07:34.116 Extended LBA Formats Supported: Supported 00:07:34.116 Flexible Data Placement Supported: Not Supported 00:07:34.116 00:07:34.116 Controller Memory Buffer Support 00:07:34.116 ================================ 00:07:34.116 Supported: No 00:07:34.116 00:07:34.116 Persistent Memory Region Support 00:07:34.116 ================================ 00:07:34.116 Supported: No 00:07:34.116 00:07:34.116 Admin Command Set Attributes 00:07:34.116 ============================ 00:07:34.116 Security Send/Receive: Not Supported 00:07:34.116 Format NVM: Supported 00:07:34.116 Firmware Activate/Download: Not Supported 00:07:34.116 Namespace Management: Supported 00:07:34.116 Device Self-Test: Not Supported 00:07:34.116 Directives: Supported 00:07:34.116 NVMe-MI: Not Supported 00:07:34.116 Virtualization Management: Not Supported 00:07:34.116 Doorbell Buffer Config: Supported 00:07:34.116 Get LBA Status Capability: Not Supported 00:07:34.116 Command & Feature Lockdown Capability: Not Supported 00:07:34.116 Abort Command Limit: 4 00:07:34.116 Async Event Request Limit: 4 00:07:34.116 Number of Firmware Slots: N/A 00:07:34.116 Firmware Slot 1 Read-Only: N/A 00:07:34.116 Firmware Activation Without Reset: N/A 00:07:34.116 Multiple Update Detection Support: N/A 00:07:34.116 Firmware Update Granularity: No Information Provided 00:07:34.116 Per-Namespace SMART Log: Yes 00:07:34.116 
Asymmetric Namespace Access Log Page: Not Supported 00:07:34.117 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:34.117 Command Effects Log Page: Supported 00:07:34.117 Get Log Page Extended Data: Supported 00:07:34.117 Telemetry Log Pages: Not Supported 00:07:34.117 Persistent Event Log Pages: Not Supported 00:07:34.117 Supported Log Pages Log Page: May Support 00:07:34.117 Commands Supported & Effects Log Page: Not Supported 00:07:34.117 Feature Identifiers & Effects Log Page:May Support 00:07:34.117 NVMe-MI Commands & Effects Log Page: May Support 00:07:34.117 Data Area 4 for Telemetry Log: Not Supported 00:07:34.117 Error Log Page Entries Supported: 1 00:07:34.117 Keep Alive: Not Supported 00:07:34.117 00:07:34.117 NVM Command Set Attributes 00:07:34.117 ========================== 00:07:34.117 Submission Queue Entry Size 00:07:34.117 Max: 64 00:07:34.117 Min: 64 00:07:34.117 Completion Queue Entry Size 00:07:34.117 Max: 16 00:07:34.117 Min: 16 00:07:34.117 Number of Namespaces: 256 00:07:34.117 Compare Command: Supported 00:07:34.117 Write Uncorrectable Command: Not Supported 00:07:34.117 Dataset Management Command: Supported 00:07:34.117 Write Zeroes Command: Supported 00:07:34.117 Set Features Save Field: Supported 00:07:34.117 Reservations: Not Supported 00:07:34.117 Timestamp: Supported 00:07:34.117 Copy: Supported 00:07:34.117 Volatile Write Cache: Present 00:07:34.117 Atomic Write Unit (Normal): 1 00:07:34.117 Atomic Write Unit (PFail): 1 00:07:34.117 Atomic Compare & Write Unit: 1 00:07:34.117 Fused Compare & Write: Not Supported 00:07:34.117 Scatter-Gather List 00:07:34.117 SGL Command Set: Supported 00:07:34.117 SGL Keyed: Not Supported 00:07:34.117 SGL Bit Bucket Descriptor: Not Supported 00:07:34.117 SGL Metadata Pointer: Not Supported 00:07:34.117 Oversized SGL: Not Supported 00:07:34.117 SGL Metadata Address: Not Supported 00:07:34.117 SGL Offset: Not Supported 00:07:34.117 Transport SGL Data Block: Not Supported 00:07:34.117 Replay Protected Memory Block: Not Supported 00:07:34.117 00:07:34.117 Firmware Slot Information 00:07:34.117 ========================= 00:07:34.117 Active slot: 1 00:07:34.117 Slot 1 Firmware Revision: 1.0 00:07:34.117 00:07:34.117 00:07:34.117 Commands Supported and Effects 00:07:34.117 ============================== 00:07:34.117 Admin Commands 00:07:34.117 -------------- 00:07:34.117 Delete I/O Submission Queue (00h): Supported 00:07:34.117 Create I/O Submission Queue (01h): Supported 00:07:34.117 Get Log Page (02h): Supported 00:07:34.117 Delete I/O Completion Queue (04h): Supported 00:07:34.117 Create I/O Completion Queue (05h): Supported 00:07:34.117 Identify (06h): Supported 00:07:34.117 Abort (08h): Supported 00:07:34.117 Set Features (09h): Supported 00:07:34.117 Get Features (0Ah): Supported 00:07:34.117 Asynchronous Event Request (0Ch): Supported 00:07:34.117 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:34.117 Directive Send (19h): Supported 00:07:34.117 Directive Receive (1Ah): Supported 00:07:34.117 Virtualization Management (1Ch): Supported 00:07:34.117 Doorbell Buffer Config (7Ch): Supported 00:07:34.117 Format NVM (80h): Supported LBA-Change 00:07:34.117 I/O Commands 00:07:34.117 ------------ 00:07:34.117 Flush (00h): Supported LBA-Change 00:07:34.117 Write (01h): Supported LBA-Change 00:07:34.117 Read (02h): Supported 00:07:34.117 Compare (05h): Supported 00:07:34.117 Write Zeroes (08h): Supported LBA-Change 00:07:34.117 Dataset Management (09h): Supported LBA-Change 00:07:34.117 Unknown (0Ch): Supported 
00:07:34.117 Unknown (12h): Supported 00:07:34.117 Copy (19h): Supported LBA-Change 00:07:34.117 Unknown (1Dh): Supported LBA-Change 00:07:34.117 00:07:34.117 Error Log 00:07:34.117 ========= 00:07:34.117 00:07:34.117 Arbitration 00:07:34.117 =========== 00:07:34.117 Arbitration Burst: no limit 00:07:34.117 00:07:34.117 Power Management 00:07:34.117 ================ 00:07:34.117 Number of Power States: 1 00:07:34.117 Current Power State: Power State #0 00:07:34.117 Power State #0: 00:07:34.117 Max Power: 25.00 W 00:07:34.117 Non-Operational State: Operational 00:07:34.117 Entry Latency: 16 microseconds 00:07:34.117 Exit Latency: 4 microseconds 00:07:34.117 Relative Read Throughput: 0 00:07:34.117 Relative Read Latency: 0 00:07:34.117 Relative Write Throughput: 0 00:07:34.117 Relative Write Latency: 0 00:07:34.117 Idle Power: Not Reported 00:07:34.117 Active Power: Not Reported 00:07:34.117 Non-Operational Permissive Mode: Not Supported 00:07:34.117 00:07:34.117 Health Information 00:07:34.117 ================== 00:07:34.117 Critical Warnings: 00:07:34.117 Available Spare Space: OK 00:07:34.117 Temperature: [2024-09-28 10:28:08.759345] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 76019 terminated unexpected 00:07:34.117 OK 00:07:34.117 Device Reliability: OK 00:07:34.117 Read Only: No 00:07:34.117 Volatile Memory Backup: OK 00:07:34.117 Current Temperature: 323 Kelvin (50 Celsius) 00:07:34.117 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:34.117 Available Spare: 0% 00:07:34.117 Available Spare Threshold: 0% 00:07:34.117 Life Percentage Used: 0% 00:07:34.117 Data Units Read: 933 00:07:34.117 Data Units Written: 807 00:07:34.117 Host Read Commands: 48819 00:07:34.117 Host Write Commands: 47713 00:07:34.117 Controller Busy Time: 0 minutes 00:07:34.117 Power Cycles: 0 00:07:34.117 Power On Hours: 0 hours 00:07:34.117 Unsafe Shutdowns: 0 00:07:34.117 Unrecoverable Media Errors: 0 00:07:34.117 Lifetime Error Log Entries: 0 00:07:34.117 Warning Temperature Time: 0 minutes 00:07:34.117 Critical Temperature Time: 0 minutes 00:07:34.117 00:07:34.117 Number of Queues 00:07:34.117 ================ 00:07:34.117 Number of I/O Submission Queues: 64 00:07:34.117 Number of I/O Completion Queues: 64 00:07:34.117 00:07:34.117 ZNS Specific Controller Data 00:07:34.117 ============================ 00:07:34.117 Zone Append Size Limit: 0 00:07:34.117 00:07:34.117 00:07:34.117 Active Namespaces 00:07:34.117 ================= 00:07:34.117 Namespace ID:1 00:07:34.117 Error Recovery Timeout: Unlimited 00:07:34.117 Command Set Identifier: NVM (00h) 00:07:34.117 Deallocate: Supported 00:07:34.117 Deallocated/Unwritten Error: Supported 00:07:34.117 Deallocated Read Value: All 0x00 00:07:34.117 Deallocate in Write Zeroes: Not Supported 00:07:34.117 Deallocated Guard Field: 0xFFFF 00:07:34.117 Flush: Supported 00:07:34.117 Reservation: Not Supported 00:07:34.117 Namespace Sharing Capabilities: Private 00:07:34.117 Size (in LBAs): 1310720 (5GiB) 00:07:34.117 Capacity (in LBAs): 1310720 (5GiB) 00:07:34.117 Utilization (in LBAs): 1310720 (5GiB) 00:07:34.117 Thin Provisioning: Not Supported 00:07:34.117 Per-NS Atomic Units: No 00:07:34.118 Maximum Single Source Range Length: 128 00:07:34.118 Maximum Copy Length: 128 00:07:34.118 Maximum Source Range Count: 128 00:07:34.118 NGUID/EUI64 Never Reused: No 00:07:34.118 Namespace Write Protected: No 00:07:34.118 Number of LBA Formats: 8 00:07:34.118 Current LBA Format: LBA Format #04 00:07:34.118 LBA Format #00: Data Size: 512 Metadata 
Size: 0 00:07:34.118 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:34.118 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:34.118 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:34.118 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:34.118 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:34.118 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:34.118 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:34.118 00:07:34.118 NVM Specific Namespace Data 00:07:34.118 =========================== 00:07:34.118 Logical Block Storage Tag Mask: 0 00:07:34.118 Protection Information Capabilities: 00:07:34.118 16b Guard Protection Information Storage Tag Support: No 00:07:34.118 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:34.118 Storage Tag Check Read Support: No 00:07:34.118 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.118 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.118 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.118 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.118 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.118 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.118 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.118 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.118 ===================================================== 00:07:34.118 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:34.118 ===================================================== 00:07:34.118 Controller Capabilities/Features 00:07:34.118 ================================ 00:07:34.118 Vendor ID: 1b36 00:07:34.118 Subsystem Vendor ID: 1af4 00:07:34.118 Serial Number: 12343 00:07:34.118 Model Number: QEMU NVMe Ctrl 00:07:34.118 Firmware Version: 8.0.0 00:07:34.118 Recommended Arb Burst: 6 00:07:34.118 IEEE OUI Identifier: 00 54 52 00:07:34.118 Multi-path I/O 00:07:34.118 May have multiple subsystem ports: No 00:07:34.118 May have multiple controllers: Yes 00:07:34.118 Associated with SR-IOV VF: No 00:07:34.118 Max Data Transfer Size: 524288 00:07:34.118 Max Number of Namespaces: 256 00:07:34.118 Max Number of I/O Queues: 64 00:07:34.118 NVMe Specification Version (VS): 1.4 00:07:34.118 NVMe Specification Version (Identify): 1.4 00:07:34.118 Maximum Queue Entries: 2048 00:07:34.118 Contiguous Queues Required: Yes 00:07:34.118 Arbitration Mechanisms Supported 00:07:34.118 Weighted Round Robin: Not Supported 00:07:34.118 Vendor Specific: Not Supported 00:07:34.118 Reset Timeout: 7500 ms 00:07:34.118 Doorbell Stride: 4 bytes 00:07:34.118 NVM Subsystem Reset: Not Supported 00:07:34.118 Command Sets Supported 00:07:34.118 NVM Command Set: Supported 00:07:34.118 Boot Partition: Not Supported 00:07:34.118 Memory Page Size Minimum: 4096 bytes 00:07:34.118 Memory Page Size Maximum: 65536 bytes 00:07:34.118 Persistent Memory Region: Not Supported 00:07:34.118 Optional Asynchronous Events Supported 00:07:34.118 Namespace Attribute Notices: Supported 00:07:34.118 Firmware Activation Notices: Not Supported 00:07:34.118 ANA Change Notices: Not Supported 00:07:34.118 PLE Aggregate Log Change Notices: Not Supported 00:07:34.118 LBA Status Info Alert 
Notices: Not Supported 00:07:34.118 EGE Aggregate Log Change Notices: Not Supported 00:07:34.118 Normal NVM Subsystem Shutdown event: Not Supported 00:07:34.118 Zone Descriptor Change Notices: Not Supported 00:07:34.118 Discovery Log Change Notices: Not Supported 00:07:34.118 Controller Attributes 00:07:34.118 128-bit Host Identifier: Not Supported 00:07:34.118 Non-Operational Permissive Mode: Not Supported 00:07:34.118 NVM Sets: Not Supported 00:07:34.118 Read Recovery Levels: Not Supported 00:07:34.118 Endurance Groups: Supported 00:07:34.118 Predictable Latency Mode: Not Supported 00:07:34.118 Traffic Based Keep ALive: Not Supported 00:07:34.118 Namespace Granularity: Not Supported 00:07:34.118 SQ Associations: Not Supported 00:07:34.118 UUID List: Not Supported 00:07:34.118 Multi-Domain Subsystem: Not Supported 00:07:34.118 Fixed Capacity Management: Not Supported 00:07:34.118 Variable Capacity Management: Not Supported 00:07:34.118 Delete Endurance Group: Not Supported 00:07:34.118 Delete NVM Set: Not Supported 00:07:34.118 Extended LBA Formats Supported: Supported 00:07:34.118 Flexible Data Placement Supported: Supported 00:07:34.118 00:07:34.118 Controller Memory Buffer Support 00:07:34.118 ================================ 00:07:34.118 Supported: No 00:07:34.118 00:07:34.118 Persistent Memory Region Support 00:07:34.118 ================================ 00:07:34.118 Supported: No 00:07:34.118 00:07:34.118 Admin Command Set Attributes 00:07:34.118 ============================ 00:07:34.118 Security Send/Receive: Not Supported 00:07:34.118 Format NVM: Supported 00:07:34.118 Firmware Activate/Download: Not Supported 00:07:34.118 Namespace Management: Supported 00:07:34.118 Device Self-Test: Not Supported 00:07:34.118 Directives: Supported 00:07:34.118 NVMe-MI: Not Supported 00:07:34.118 Virtualization Management: Not Supported 00:07:34.118 Doorbell Buffer Config: Supported 00:07:34.118 Get LBA Status Capability: Not Supported 00:07:34.118 Command & Feature Lockdown Capability: Not Supported 00:07:34.118 Abort Command Limit: 4 00:07:34.118 Async Event Request Limit: 4 00:07:34.118 Number of Firmware Slots: N/A 00:07:34.118 Firmware Slot 1 Read-Only: N/A 00:07:34.118 Firmware Activation Without Reset: N/A 00:07:34.118 Multiple Update Detection Support: N/A 00:07:34.118 Firmware Update Granularity: No Information Provided 00:07:34.118 Per-Namespace SMART Log: Yes 00:07:34.118 Asymmetric Namespace Access Log Page: Not Supported 00:07:34.118 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:34.118 Command Effects Log Page: Supported 00:07:34.118 Get Log Page Extended Data: Supported 00:07:34.118 Telemetry Log Pages: Not Supported 00:07:34.118 Persistent Event Log Pages: Not Supported 00:07:34.118 Supported Log Pages Log Page: May Support 00:07:34.118 Commands Supported & Effects Log Page: Not Supported 00:07:34.118 Feature Identifiers & Effects Log Page:May Support 00:07:34.118 NVMe-MI Commands & Effects Log Page: May Support 00:07:34.118 Data Area 4 for Telemetry Log: Not Supported 00:07:34.118 Error Log Page Entries Supported: 1 00:07:34.118 Keep Alive: Not Supported 00:07:34.118 00:07:34.118 NVM Command Set Attributes 00:07:34.118 ========================== 00:07:34.118 Submission Queue Entry Size 00:07:34.118 Max: 64 00:07:34.118 Min: 64 00:07:34.118 Completion Queue Entry Size 00:07:34.118 Max: 16 00:07:34.118 Min: 16 00:07:34.118 Number of Namespaces: 256 00:07:34.118 Compare Command: Supported 00:07:34.118 Write Uncorrectable Command: Not Supported 00:07:34.119 Dataset Management 
Command: Supported 00:07:34.119 Write Zeroes Command: Supported 00:07:34.119 Set Features Save Field: Supported 00:07:34.119 Reservations: Not Supported 00:07:34.119 Timestamp: Supported 00:07:34.119 Copy: Supported 00:07:34.119 Volatile Write Cache: Present 00:07:34.119 Atomic Write Unit (Normal): 1 00:07:34.119 Atomic Write Unit (PFail): 1 00:07:34.119 Atomic Compare & Write Unit: 1 00:07:34.119 Fused Compare & Write: Not Supported 00:07:34.119 Scatter-Gather List 00:07:34.119 SGL Command Set: Supported 00:07:34.119 SGL Keyed: Not Supported 00:07:34.119 SGL Bit Bucket Descriptor: Not Supported 00:07:34.119 SGL Metadata Pointer: Not Supported 00:07:34.119 Oversized SGL: Not Supported 00:07:34.119 SGL Metadata Address: Not Supported 00:07:34.119 SGL Offset: Not Supported 00:07:34.119 Transport SGL Data Block: Not Supported 00:07:34.119 Replay Protected Memory Block: Not Supported 00:07:34.119 00:07:34.119 Firmware Slot Information 00:07:34.119 ========================= 00:07:34.119 Active slot: 1 00:07:34.119 Slot 1 Firmware Revision: 1.0 00:07:34.119 00:07:34.119 00:07:34.119 Commands Supported and Effects 00:07:34.119 ============================== 00:07:34.119 Admin Commands 00:07:34.119 -------------- 00:07:34.119 Delete I/O Submission Queue (00h): Supported 00:07:34.119 Create I/O Submission Queue (01h): Supported 00:07:34.119 Get Log Page (02h): Supported 00:07:34.119 Delete I/O Completion Queue (04h): Supported 00:07:34.119 Create I/O Completion Queue (05h): Supported 00:07:34.119 Identify (06h): Supported 00:07:34.119 Abort (08h): Supported 00:07:34.119 Set Features (09h): Supported 00:07:34.119 Get Features (0Ah): Supported 00:07:34.119 Asynchronous Event Request (0Ch): Supported 00:07:34.119 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:34.119 Directive Send (19h): Supported 00:07:34.119 Directive Receive (1Ah): Supported 00:07:34.119 Virtualization Management (1Ch): Supported 00:07:34.119 Doorbell Buffer Config (7Ch): Supported 00:07:34.119 Format NVM (80h): Supported LBA-Change 00:07:34.119 I/O Commands 00:07:34.119 ------------ 00:07:34.119 Flush (00h): Supported LBA-Change 00:07:34.119 Write (01h): Supported LBA-Change 00:07:34.119 Read (02h): Supported 00:07:34.119 Compare (05h): Supported 00:07:34.119 Write Zeroes (08h): Supported LBA-Change 00:07:34.119 Dataset Management (09h): Supported LBA-Change 00:07:34.119 Unknown (0Ch): Supported 00:07:34.119 Unknown (12h): Supported 00:07:34.119 Copy (19h): Supported LBA-Change 00:07:34.119 Unknown (1Dh): Supported LBA-Change 00:07:34.119 00:07:34.119 Error Log 00:07:34.119 ========= 00:07:34.119 00:07:34.119 Arbitration 00:07:34.119 =========== 00:07:34.119 Arbitration Burst: no limit 00:07:34.119 00:07:34.119 Power Management 00:07:34.119 ================ 00:07:34.119 Number of Power States: 1 00:07:34.119 Current Power State: Power State #0 00:07:34.119 Power State #0: 00:07:34.119 Max Power: 25.00 W 00:07:34.119 Non-Operational State: Operational 00:07:34.119 Entry Latency: 16 microseconds 00:07:34.119 Exit Latency: 4 microseconds 00:07:34.119 Relative Read Throughput: 0 00:07:34.119 Relative Read Latency: 0 00:07:34.119 Relative Write Throughput: 0 00:07:34.119 Relative Write Latency: 0 00:07:34.119 Idle Power: Not Reported 00:07:34.119 Active Power: Not Reported 00:07:34.119 Non-Operational Permissive Mode: Not Supported 00:07:34.119 00:07:34.119 Health Information 00:07:34.119 ================== 00:07:34.119 Critical Warnings: 00:07:34.119 Available Spare Space: OK 00:07:34.119 Temperature: OK 00:07:34.119 
Device Reliability: OK 00:07:34.119 Read Only: No 00:07:34.119 Volatile Memory Backup: OK 00:07:34.119 Current Temperature: 323 Kelvin (50 Celsius) 00:07:34.119 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:34.119 Available Spare: 0% 00:07:34.119 Available Spare Threshold: 0% 00:07:34.119 Life Percentage Used: 0% 00:07:34.119 Data Units Read: 786 00:07:34.119 Data Units Written: 715 00:07:34.119 Host Read Commands: 35018 00:07:34.119 Host Write Commands: 34441 00:07:34.119 Controller Busy Time: 0 minutes 00:07:34.119 Power Cycles: 0 00:07:34.119 Power On Hours: 0 hours 00:07:34.119 Unsafe Shutdowns: 0 00:07:34.119 Unrecoverable Media Errors: 0 00:07:34.119 Lifetime Error Log Entries: 0 00:07:34.119 Warning Temperature Time: 0 minutes 00:07:34.119 Critical Temperature Time: 0 minutes 00:07:34.119 00:07:34.119 Number of Queues 00:07:34.119 ================ 00:07:34.119 Number of I/O Submission Queues: 64 00:07:34.119 Number of I/O Completion Queues: 64 00:07:34.119 00:07:34.119 ZNS Specific Controller Data 00:07:34.119 ============================ 00:07:34.119 Zone Append Size Limit: 0 00:07:34.119 00:07:34.119 00:07:34.119 Active Namespaces 00:07:34.119 ================= 00:07:34.119 Namespace ID:1 00:07:34.119 Error Recovery Timeout: Unlimited 00:07:34.119 Command Set Identifier: NVM (00h) 00:07:34.119 Deallocate: Supported 00:07:34.119 Deallocated/Unwritten Error: Supported 00:07:34.119 Deallocated Read Value: All 0x00 00:07:34.119 Deallocate in Write Zeroes: Not Supported 00:07:34.119 Deallocated Guard Field: 0xFFFF 00:07:34.119 Flush: Supported 00:07:34.119 Reservation: Not Supported 00:07:34.119 Namespace Sharing Capabilities: Multiple Controllers 00:07:34.119 Size (in LBAs): 262144 (1GiB) 00:07:34.119 Capacity (in LBAs): 262144 (1GiB) 00:07:34.119 Utilization (in LBAs): 262144 (1GiB) 00:07:34.119 Thin Provisioning: Not Supported 00:07:34.119 Per-NS Atomic Units: No 00:07:34.119 Maximum Single Source Range Length: 128 00:07:34.119 Maximum Copy Length: 128 00:07:34.119 Maximum Source Range Count: 128 00:07:34.119 NGUID/EUI64 Never Reused: No 00:07:34.119 Namespace Write Protected: No 00:07:34.119 Endurance group ID: 1 00:07:34.119 Number of LBA Formats: 8 00:07:34.119 Current LBA Format: LBA Format #04 00:07:34.119 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:34.119 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:34.119 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:34.119 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:34.119 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:34.119 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:34.119 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:34.119 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:34.119 00:07:34.119 Get Feature FDP: 00:07:34.119 ================ 00:07:34.119 Enabled: Yes 00:07:34.119 FDP configuration index: 0 00:07:34.119 00:07:34.119 FDP configurations log page 00:07:34.119 =========================== 00:07:34.119 Number of FDP configurations: 1 00:07:34.119 Version: 0 00:07:34.119 Size: 112 00:07:34.119 FDP Configuration Descriptor: 0 00:07:34.119 Descriptor Size: 96 00:07:34.119 Reclaim Group Identifier format: 2 00:07:34.119 FDP Volatile Write Cache: Not Present 00:07:34.119 FDP Configuration: Valid 00:07:34.119 Vendor Specific Size: 0 00:07:34.119 Number of Reclaim Groups: 2 00:07:34.119 Number of Recalim Unit Handles: 8 00:07:34.119 Max Placement Identifiers: 128 00:07:34.119 Number of Namespaces Suppprted: 256 00:07:34.119 Reclaim unit Nominal Size: 6000000 
bytes 00:07:34.119 Estimated Reclaim Unit Time Limit: Not Reported 00:07:34.119 RUH Desc #000: RUH Type: Initially Isolated 00:07:34.119 RUH Desc #001: RUH Type: Initially Isolated 00:07:34.119 RUH Desc #002: RUH Type: Initially Isolated 00:07:34.119 RUH Desc #003: RUH Type: Initially Isolated 00:07:34.119 RUH Desc #004: RUH Type: Initially Isolated 00:07:34.119 RUH Desc #005: RUH Type: Initially Isolated 00:07:34.119 RUH Desc #006: RUH Type: Initially Isolated 00:07:34.119 RUH Desc #007: RUH Type: Initially Isolated 00:07:34.119 00:07:34.119 FDP reclaim unit handle usage log page 00:07:34.119 ====================================== 00:07:34.119 Number of Reclaim Unit Handles: 8 00:07:34.119 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:34.119 RUH Usage Desc #001: RUH Attributes: Unused 00:07:34.119 RUH Usage Desc #002: RUH Attributes: Unused 00:07:34.120 RUH Usage Desc #003: RUH Attributes: Unused 00:07:34.120 RUH Usage Desc #004: RUH Attributes: Unused 00:07:34.120 RUH Usage Desc #005: RUH Attributes: Unused 00:07:34.120 RUH Usage Desc #006: RUH Attributes: Unused 00:07:34.120 RUH Usage Desc #007: RUH Attributes: Unused 00:07:34.120 00:07:34.120 FDP statistics log page 00:07:34.120 ======================= 00:07:34.120 Host bytes with metadata written: 457613312 00:07:34.120 Media[2024-09-28 10:28:08.760522] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 76019 terminated unexpected 00:07:34.120 bytes with metadata written: 457658368 00:07:34.120 Media bytes erased: 0 00:07:34.120 00:07:34.120 FDP events log page 00:07:34.120 =================== 00:07:34.120 Number of FDP events: 0 00:07:34.120 00:07:34.120 NVM Specific Namespace Data 00:07:34.120 =========================== 00:07:34.120 Logical Block Storage Tag Mask: 0 00:07:34.120 Protection Information Capabilities: 00:07:34.120 16b Guard Protection Information Storage Tag Support: No 00:07:34.120 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:34.120 Storage Tag Check Read Support: No 00:07:34.120 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.120 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.120 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.120 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.120 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.120 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.120 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.120 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.120 ===================================================== 00:07:34.120 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:34.120 ===================================================== 00:07:34.120 Controller Capabilities/Features 00:07:34.120 ================================ 00:07:34.120 Vendor ID: 1b36 00:07:34.120 Subsystem Vendor ID: 1af4 00:07:34.120 Serial Number: 12342 00:07:34.120 Model Number: QEMU NVMe Ctrl 00:07:34.120 Firmware Version: 8.0.0 00:07:34.120 Recommended Arb Burst: 6 00:07:34.120 IEEE OUI Identifier: 00 54 52 00:07:34.120 Multi-path I/O 00:07:34.120 May have multiple subsystem ports: No 00:07:34.120 May have multiple 
controllers: No 00:07:34.120 Associated with SR-IOV VF: No 00:07:34.120 Max Data Transfer Size: 524288 00:07:34.120 Max Number of Namespaces: 256 00:07:34.120 Max Number of I/O Queues: 64 00:07:34.120 NVMe Specification Version (VS): 1.4 00:07:34.120 NVMe Specification Version (Identify): 1.4 00:07:34.120 Maximum Queue Entries: 2048 00:07:34.120 Contiguous Queues Required: Yes 00:07:34.120 Arbitration Mechanisms Supported 00:07:34.120 Weighted Round Robin: Not Supported 00:07:34.120 Vendor Specific: Not Supported 00:07:34.120 Reset Timeout: 7500 ms 00:07:34.120 Doorbell Stride: 4 bytes 00:07:34.120 NVM Subsystem Reset: Not Supported 00:07:34.120 Command Sets Supported 00:07:34.120 NVM Command Set: Supported 00:07:34.120 Boot Partition: Not Supported 00:07:34.120 Memory Page Size Minimum: 4096 bytes 00:07:34.120 Memory Page Size Maximum: 65536 bytes 00:07:34.120 Persistent Memory Region: Not Supported 00:07:34.120 Optional Asynchronous Events Supported 00:07:34.120 Namespace Attribute Notices: Supported 00:07:34.120 Firmware Activation Notices: Not Supported 00:07:34.120 ANA Change Notices: Not Supported 00:07:34.120 PLE Aggregate Log Change Notices: Not Supported 00:07:34.120 LBA Status Info Alert Notices: Not Supported 00:07:34.120 EGE Aggregate Log Change Notices: Not Supported 00:07:34.120 Normal NVM Subsystem Shutdown event: Not Supported 00:07:34.120 Zone Descriptor Change Notices: Not Supported 00:07:34.120 Discovery Log Change Notices: Not Supported 00:07:34.120 Controller Attributes 00:07:34.120 128-bit Host Identifier: Not Supported 00:07:34.120 Non-Operational Permissive Mode: Not Supported 00:07:34.120 NVM Sets: Not Supported 00:07:34.120 Read Recovery Levels: Not Supported 00:07:34.120 Endurance Groups: Not Supported 00:07:34.120 Predictable Latency Mode: Not Supported 00:07:34.120 Traffic Based Keep ALive: Not Supported 00:07:34.120 Namespace Granularity: Not Supported 00:07:34.120 SQ Associations: Not Supported 00:07:34.120 UUID List: Not Supported 00:07:34.120 Multi-Domain Subsystem: Not Supported 00:07:34.120 Fixed Capacity Management: Not Supported 00:07:34.120 Variable Capacity Management: Not Supported 00:07:34.120 Delete Endurance Group: Not Supported 00:07:34.120 Delete NVM Set: Not Supported 00:07:34.120 Extended LBA Formats Supported: Supported 00:07:34.120 Flexible Data Placement Supported: Not Supported 00:07:34.120 00:07:34.120 Controller Memory Buffer Support 00:07:34.120 ================================ 00:07:34.120 Supported: No 00:07:34.120 00:07:34.120 Persistent Memory Region Support 00:07:34.120 ================================ 00:07:34.120 Supported: No 00:07:34.120 00:07:34.120 Admin Command Set Attributes 00:07:34.120 ============================ 00:07:34.120 Security Send/Receive: Not Supported 00:07:34.120 Format NVM: Supported 00:07:34.120 Firmware Activate/Download: Not Supported 00:07:34.120 Namespace Management: Supported 00:07:34.120 Device Self-Test: Not Supported 00:07:34.120 Directives: Supported 00:07:34.120 NVMe-MI: Not Supported 00:07:34.120 Virtualization Management: Not Supported 00:07:34.120 Doorbell Buffer Config: Supported 00:07:34.120 Get LBA Status Capability: Not Supported 00:07:34.120 Command & Feature Lockdown Capability: Not Supported 00:07:34.120 Abort Command Limit: 4 00:07:34.120 Async Event Request Limit: 4 00:07:34.120 Number of Firmware Slots: N/A 00:07:34.120 Firmware Slot 1 Read-Only: N/A 00:07:34.120 Firmware Activation Without Reset: N/A 00:07:34.120 Multiple Update Detection Support: N/A 00:07:34.120 Firmware Update 
Granularity: No Information Provided 00:07:34.120 Per-Namespace SMART Log: Yes 00:07:34.120 Asymmetric Namespace Access Log Page: Not Supported 00:07:34.120 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:34.120 Command Effects Log Page: Supported 00:07:34.120 Get Log Page Extended Data: Supported 00:07:34.120 Telemetry Log Pages: Not Supported 00:07:34.120 Persistent Event Log Pages: Not Supported 00:07:34.120 Supported Log Pages Log Page: May Support 00:07:34.120 Commands Supported & Effects Log Page: Not Supported 00:07:34.120 Feature Identifiers & Effects Log Page:May Support 00:07:34.120 NVMe-MI Commands & Effects Log Page: May Support 00:07:34.120 Data Area 4 for Telemetry Log: Not Supported 00:07:34.120 Error Log Page Entries Supported: 1 00:07:34.120 Keep Alive: Not Supported 00:07:34.120 00:07:34.120 NVM Command Set Attributes 00:07:34.120 ========================== 00:07:34.120 Submission Queue Entry Size 00:07:34.120 Max: 64 00:07:34.120 Min: 64 00:07:34.121 Completion Queue Entry Size 00:07:34.121 Max: 16 00:07:34.121 Min: 16 00:07:34.121 Number of Namespaces: 256 00:07:34.121 Compare Command: Supported 00:07:34.121 Write Uncorrectable Command: Not Supported 00:07:34.121 Dataset Management Command: Supported 00:07:34.121 Write Zeroes Command: Supported 00:07:34.121 Set Features Save Field: Supported 00:07:34.121 Reservations: Not Supported 00:07:34.121 Timestamp: Supported 00:07:34.121 Copy: Supported 00:07:34.121 Volatile Write Cache: Present 00:07:34.121 Atomic Write Unit (Normal): 1 00:07:34.121 Atomic Write Unit (PFail): 1 00:07:34.121 Atomic Compare & Write Unit: 1 00:07:34.121 Fused Compare & Write: Not Supported 00:07:34.121 Scatter-Gather List 00:07:34.121 SGL Command Set: Supported 00:07:34.121 SGL Keyed: Not Supported 00:07:34.121 SGL Bit Bucket Descriptor: Not Supported 00:07:34.121 SGL Metadata Pointer: Not Supported 00:07:34.121 Oversized SGL: Not Supported 00:07:34.121 SGL Metadata Address: Not Supported 00:07:34.121 SGL Offset: Not Supported 00:07:34.121 Transport SGL Data Block: Not Supported 00:07:34.121 Replay Protected Memory Block: Not Supported 00:07:34.121 00:07:34.121 Firmware Slot Information 00:07:34.121 ========================= 00:07:34.121 Active slot: 1 00:07:34.121 Slot 1 Firmware Revision: 1.0 00:07:34.121 00:07:34.121 00:07:34.121 Commands Supported and Effects 00:07:34.121 ============================== 00:07:34.121 Admin Commands 00:07:34.121 -------------- 00:07:34.121 Delete I/O Submission Queue (00h): Supported 00:07:34.121 Create I/O Submission Queue (01h): Supported 00:07:34.121 Get Log Page (02h): Supported 00:07:34.121 Delete I/O Completion Queue (04h): Supported 00:07:34.121 Create I/O Completion Queue (05h): Supported 00:07:34.121 Identify (06h): Supported 00:07:34.121 Abort (08h): Supported 00:07:34.121 Set Features (09h): Supported 00:07:34.121 Get Features (0Ah): Supported 00:07:34.121 Asynchronous Event Request (0Ch): Supported 00:07:34.121 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:34.121 Directive Send (19h): Supported 00:07:34.121 Directive Receive (1Ah): Supported 00:07:34.121 Virtualization Management (1Ch): Supported 00:07:34.121 Doorbell Buffer Config (7Ch): Supported 00:07:34.121 Format NVM (80h): Supported LBA-Change 00:07:34.121 I/O Commands 00:07:34.121 ------------ 00:07:34.121 Flush (00h): Supported LBA-Change 00:07:34.121 Write (01h): Supported LBA-Change 00:07:34.121 Read (02h): Supported 00:07:34.121 Compare (05h): Supported 00:07:34.121 Write Zeroes (08h): Supported LBA-Change 00:07:34.121 
Dataset Management (09h): Supported LBA-Change 00:07:34.121 Unknown (0Ch): Supported 00:07:34.121 Unknown (12h): Supported 00:07:34.121 Copy (19h): Supported LBA-Change 00:07:34.121 Unknown (1Dh): Supported LBA-Change 00:07:34.121 00:07:34.121 Error Log 00:07:34.121 ========= 00:07:34.121 00:07:34.121 Arbitration 00:07:34.121 =========== 00:07:34.121 Arbitration Burst: no limit 00:07:34.121 00:07:34.121 Power Management 00:07:34.121 ================ 00:07:34.121 Number of Power States: 1 00:07:34.121 Current Power State: Power State #0 00:07:34.121 Power State #0: 00:07:34.121 Max Power: 25.00 W 00:07:34.121 Non-Operational State: Operational 00:07:34.121 Entry Latency: 16 microseconds 00:07:34.121 Exit Latency: 4 microseconds 00:07:34.121 Relative Read Throughput: 0 00:07:34.121 Relative Read Latency: 0 00:07:34.121 Relative Write Throughput: 0 00:07:34.121 Relative Write Latency: 0 00:07:34.121 Idle Power: Not Reported 00:07:34.121 Active Power: Not Reported 00:07:34.121 Non-Operational Permissive Mode: Not Supported 00:07:34.121 00:07:34.121 Health Information 00:07:34.121 ================== 00:07:34.121 Critical Warnings: 00:07:34.121 Available Spare Space: OK 00:07:34.121 Temperature: OK 00:07:34.121 Device Reliability: OK 00:07:34.121 Read Only: No 00:07:34.121 Volatile Memory Backup: OK 00:07:34.121 Current Temperature: 323 Kelvin (50 Celsius) 00:07:34.121 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:34.121 Available Spare: 0% 00:07:34.121 Available Spare Threshold: 0% 00:07:34.121 Life Percentage Used: 0% 00:07:34.121 Data Units Read: 2040 00:07:34.121 Data Units Written: 1827 00:07:34.121 Host Read Commands: 101798 00:07:34.121 Host Write Commands: 100067 00:07:34.121 Controller Busy Time: 0 minutes 00:07:34.121 Power Cycles: 0 00:07:34.121 Power On Hours: 0 hours 00:07:34.121 Unsafe Shutdowns: 0 00:07:34.121 Unrecoverable Media Errors: 0 00:07:34.121 Lifetime Error Log Entries: 0 00:07:34.121 Warning Temperature Time: 0 minutes 00:07:34.121 Critical Temperature Time: 0 minutes 00:07:34.121 00:07:34.121 Number of Queues 00:07:34.121 ================ 00:07:34.121 Number of I/O Submission Queues: 64 00:07:34.121 Number of I/O Completion Queues: 64 00:07:34.121 00:07:34.121 ZNS Specific Controller Data 00:07:34.121 ============================ 00:07:34.121 Zone Append Size Limit: 0 00:07:34.121 00:07:34.121 00:07:34.121 Active Namespaces 00:07:34.121 ================= 00:07:34.121 Namespace ID:1 00:07:34.121 Error Recovery Timeout: Unlimited 00:07:34.121 Command Set Identifier: NVM (00h) 00:07:34.121 Deallocate: Supported 00:07:34.121 Deallocated/Unwritten Error: Supported 00:07:34.121 Deallocated Read Value: All 0x00 00:07:34.121 Deallocate in Write Zeroes: Not Supported 00:07:34.121 Deallocated Guard Field: 0xFFFF 00:07:34.121 Flush: Supported 00:07:34.121 Reservation: Not Supported 00:07:34.121 Namespace Sharing Capabilities: Private 00:07:34.121 Size (in LBAs): 1048576 (4GiB) 00:07:34.121 Capacity (in LBAs): 1048576 (4GiB) 00:07:34.121 Utilization (in LBAs): 1048576 (4GiB) 00:07:34.121 Thin Provisioning: Not Supported 00:07:34.121 Per-NS Atomic Units: No 00:07:34.121 Maximum Single Source Range Length: 128 00:07:34.121 Maximum Copy Length: 128 00:07:34.121 Maximum Source Range Count: 128 00:07:34.121 NGUID/EUI64 Never Reused: No 00:07:34.121 Namespace Write Protected: No 00:07:34.121 Number of LBA Formats: 8 00:07:34.121 Current LBA Format: LBA Format #04 00:07:34.121 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:34.121 LBA Format #01: Data Size: 512 Metadata 
Size: 8 00:07:34.121 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:34.121 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:34.121 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:34.121 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:34.121 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:34.121 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:34.121 00:07:34.121 NVM Specific Namespace Data 00:07:34.121 =========================== 00:07:34.121 Logical Block Storage Tag Mask: 0 00:07:34.121 Protection Information Capabilities: 00:07:34.121 16b Guard Protection Information Storage Tag Support: No 00:07:34.121 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:34.121 Storage Tag Check Read Support: No 00:07:34.121 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.121 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.121 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.121 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.122 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.122 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.122 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.122 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.122 Namespace ID:2 00:07:34.122 Error Recovery Timeout: Unlimited 00:07:34.122 Command Set Identifier: NVM (00h) 00:07:34.122 Deallocate: Supported 00:07:34.122 Deallocated/Unwritten Error: Supported 00:07:34.122 Deallocated Read Value: All 0x00 00:07:34.122 Deallocate in Write Zeroes: Not Supported 00:07:34.122 Deallocated Guard Field: 0xFFFF 00:07:34.122 Flush: Supported 00:07:34.122 Reservation: Not Supported 00:07:34.122 Namespace Sharing Capabilities: Private 00:07:34.122 Size (in LBAs): 1048576 (4GiB) 00:07:34.122 Capacity (in LBAs): 1048576 (4GiB) 00:07:34.122 Utilization (in LBAs): 1048576 (4GiB) 00:07:34.122 Thin Provisioning: Not Supported 00:07:34.122 Per-NS Atomic Units: No 00:07:34.122 Maximum Single Source Range Length: 128 00:07:34.122 Maximum Copy Length: 128 00:07:34.122 Maximum Source Range Count: 128 00:07:34.122 NGUID/EUI64 Never Reused: No 00:07:34.122 Namespace Write Protected: No 00:07:34.122 Number of LBA Formats: 8 00:07:34.122 Current LBA Format: LBA Format #04 00:07:34.122 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:34.122 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:34.122 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:34.122 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:34.122 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:34.122 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:34.122 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:34.122 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:34.122 00:07:34.122 NVM Specific Namespace Data 00:07:34.122 =========================== 00:07:34.122 Logical Block Storage Tag Mask: 0 00:07:34.122 Protection Information Capabilities: 00:07:34.122 16b Guard Protection Information Storage Tag Support: No 00:07:34.122 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:34.122 Storage Tag Check Read Support: No 00:07:34.122 Extended LBA Format 
#00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.122 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.122 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.122 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.122 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.122 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.122 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.122 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.122 Namespace ID:3 00:07:34.122 Error Recovery Timeout: Unlimited 00:07:34.122 Command Set Identifier: NVM (00h) 00:07:34.122 Deallocate: Supported 00:07:34.122 Deallocated/Unwritten Error: Supported 00:07:34.122 Deallocated Read Value: All 0x00 00:07:34.122 Deallocate in Write Zeroes: Not Supported 00:07:34.122 Deallocated Guard Field: 0xFFFF 00:07:34.122 Flush: Supported 00:07:34.122 Reservation: Not Supported 00:07:34.122 Namespace Sharing Capabilities: Private 00:07:34.122 Size (in LBAs): 1048576 (4GiB) 00:07:34.122 Capacity (in LBAs): 1048576 (4GiB) 00:07:34.122 Utilization (in LBAs): 1048576 (4GiB) 00:07:34.122 Thin Provisioning: Not Supported 00:07:34.122 Per-NS Atomic Units: No 00:07:34.122 Maximum Single Source Range Length: 128 00:07:34.122 Maximum Copy Length: 128 00:07:34.122 Maximum Source Range Count: 128 00:07:34.122 NGUID/EUI64 Never Reused: No 00:07:34.122 Namespace Write Protected: No 00:07:34.122 Number of LBA Formats: 8 00:07:34.122 Current LBA Format: LBA Format #04 00:07:34.122 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:34.122 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:34.122 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:34.122 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:34.122 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:34.122 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:34.122 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:34.122 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:34.122 00:07:34.122 NVM Specific Namespace Data 00:07:34.122 =========================== 00:07:34.122 Logical Block Storage Tag Mask: 0 00:07:34.122 Protection Information Capabilities: 00:07:34.122 16b Guard Protection Information Storage Tag Support: No 00:07:34.122 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:34.122 Storage Tag Check Read Support: No 00:07:34.122 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.122 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.122 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.122 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.122 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.122 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.122 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.122 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.122 
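The identify dumps above and below are emitted once per controller: as the traced nvme.sh lines that follow show, the test loops over each PCIe BDF and runs spdk_nvme_identify against it. A minimal sketch of that loop, assuming the build path from this run and the four BDFs enumerated in this log (0000:00:10.0, 0000:00:11.0, 0000:00:12.0, 0000:00:13.0), would look like:

#!/usr/bin/env bash
# Sketch of the per-controller identify loop seen in the trace below.
# SPDK_BIN and the bdfs array are assumptions taken from this run's log;
# adjust both for a different environment.
set -euo pipefail

SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin
bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)

for bdf in "${bdfs[@]}"; do
    "${SPDK_BIN}/spdk_nvme_identify" -r "trtype:PCIe traddr:${bdf}" -i 0
done

Each iteration prints the full controller capabilities, log-page, namespace and (where applicable) FDP sections shown in this output.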
10:28:08 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:34.122 10:28:08 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:34.382 ===================================================== 00:07:34.382 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:34.382 ===================================================== 00:07:34.382 Controller Capabilities/Features 00:07:34.382 ================================ 00:07:34.382 Vendor ID: 1b36 00:07:34.382 Subsystem Vendor ID: 1af4 00:07:34.382 Serial Number: 12340 00:07:34.382 Model Number: QEMU NVMe Ctrl 00:07:34.382 Firmware Version: 8.0.0 00:07:34.382 Recommended Arb Burst: 6 00:07:34.382 IEEE OUI Identifier: 00 54 52 00:07:34.382 Multi-path I/O 00:07:34.382 May have multiple subsystem ports: No 00:07:34.382 May have multiple controllers: No 00:07:34.382 Associated with SR-IOV VF: No 00:07:34.382 Max Data Transfer Size: 524288 00:07:34.382 Max Number of Namespaces: 256 00:07:34.382 Max Number of I/O Queues: 64 00:07:34.382 NVMe Specification Version (VS): 1.4 00:07:34.382 NVMe Specification Version (Identify): 1.4 00:07:34.382 Maximum Queue Entries: 2048 00:07:34.382 Contiguous Queues Required: Yes 00:07:34.382 Arbitration Mechanisms Supported 00:07:34.382 Weighted Round Robin: Not Supported 00:07:34.382 Vendor Specific: Not Supported 00:07:34.382 Reset Timeout: 7500 ms 00:07:34.382 Doorbell Stride: 4 bytes 00:07:34.382 NVM Subsystem Reset: Not Supported 00:07:34.382 Command Sets Supported 00:07:34.382 NVM Command Set: Supported 00:07:34.382 Boot Partition: Not Supported 00:07:34.382 Memory Page Size Minimum: 4096 bytes 00:07:34.382 Memory Page Size Maximum: 65536 bytes 00:07:34.382 Persistent Memory Region: Not Supported 00:07:34.382 Optional Asynchronous Events Supported 00:07:34.382 Namespace Attribute Notices: Supported 00:07:34.382 Firmware Activation Notices: Not Supported 00:07:34.382 ANA Change Notices: Not Supported 00:07:34.382 PLE Aggregate Log Change Notices: Not Supported 00:07:34.382 LBA Status Info Alert Notices: Not Supported 00:07:34.382 EGE Aggregate Log Change Notices: Not Supported 00:07:34.382 Normal NVM Subsystem Shutdown event: Not Supported 00:07:34.382 Zone Descriptor Change Notices: Not Supported 00:07:34.382 Discovery Log Change Notices: Not Supported 00:07:34.382 Controller Attributes 00:07:34.382 128-bit Host Identifier: Not Supported 00:07:34.382 Non-Operational Permissive Mode: Not Supported 00:07:34.382 NVM Sets: Not Supported 00:07:34.382 Read Recovery Levels: Not Supported 00:07:34.382 Endurance Groups: Not Supported 00:07:34.382 Predictable Latency Mode: Not Supported 00:07:34.382 Traffic Based Keep ALive: Not Supported 00:07:34.382 Namespace Granularity: Not Supported 00:07:34.382 SQ Associations: Not Supported 00:07:34.382 UUID List: Not Supported 00:07:34.382 Multi-Domain Subsystem: Not Supported 00:07:34.382 Fixed Capacity Management: Not Supported 00:07:34.382 Variable Capacity Management: Not Supported 00:07:34.382 Delete Endurance Group: Not Supported 00:07:34.382 Delete NVM Set: Not Supported 00:07:34.382 Extended LBA Formats Supported: Supported 00:07:34.382 Flexible Data Placement Supported: Not Supported 00:07:34.382 00:07:34.382 Controller Memory Buffer Support 00:07:34.382 ================================ 00:07:34.382 Supported: No 00:07:34.382 00:07:34.382 Persistent Memory Region Support 00:07:34.382 ================================ 00:07:34.382 Supported: No 00:07:34.382 
00:07:34.382 Admin Command Set Attributes 00:07:34.382 ============================ 00:07:34.382 Security Send/Receive: Not Supported 00:07:34.382 Format NVM: Supported 00:07:34.382 Firmware Activate/Download: Not Supported 00:07:34.382 Namespace Management: Supported 00:07:34.382 Device Self-Test: Not Supported 00:07:34.382 Directives: Supported 00:07:34.382 NVMe-MI: Not Supported 00:07:34.382 Virtualization Management: Not Supported 00:07:34.382 Doorbell Buffer Config: Supported 00:07:34.382 Get LBA Status Capability: Not Supported 00:07:34.382 Command & Feature Lockdown Capability: Not Supported 00:07:34.382 Abort Command Limit: 4 00:07:34.382 Async Event Request Limit: 4 00:07:34.382 Number of Firmware Slots: N/A 00:07:34.382 Firmware Slot 1 Read-Only: N/A 00:07:34.382 Firmware Activation Without Reset: N/A 00:07:34.382 Multiple Update Detection Support: N/A 00:07:34.382 Firmware Update Granularity: No Information Provided 00:07:34.382 Per-Namespace SMART Log: Yes 00:07:34.382 Asymmetric Namespace Access Log Page: Not Supported 00:07:34.382 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:34.382 Command Effects Log Page: Supported 00:07:34.382 Get Log Page Extended Data: Supported 00:07:34.382 Telemetry Log Pages: Not Supported 00:07:34.382 Persistent Event Log Pages: Not Supported 00:07:34.382 Supported Log Pages Log Page: May Support 00:07:34.382 Commands Supported & Effects Log Page: Not Supported 00:07:34.382 Feature Identifiers & Effects Log Page:May Support 00:07:34.382 NVMe-MI Commands & Effects Log Page: May Support 00:07:34.382 Data Area 4 for Telemetry Log: Not Supported 00:07:34.382 Error Log Page Entries Supported: 1 00:07:34.382 Keep Alive: Not Supported 00:07:34.382 00:07:34.382 NVM Command Set Attributes 00:07:34.382 ========================== 00:07:34.382 Submission Queue Entry Size 00:07:34.382 Max: 64 00:07:34.382 Min: 64 00:07:34.382 Completion Queue Entry Size 00:07:34.382 Max: 16 00:07:34.382 Min: 16 00:07:34.382 Number of Namespaces: 256 00:07:34.382 Compare Command: Supported 00:07:34.382 Write Uncorrectable Command: Not Supported 00:07:34.382 Dataset Management Command: Supported 00:07:34.382 Write Zeroes Command: Supported 00:07:34.382 Set Features Save Field: Supported 00:07:34.382 Reservations: Not Supported 00:07:34.382 Timestamp: Supported 00:07:34.382 Copy: Supported 00:07:34.382 Volatile Write Cache: Present 00:07:34.382 Atomic Write Unit (Normal): 1 00:07:34.382 Atomic Write Unit (PFail): 1 00:07:34.382 Atomic Compare & Write Unit: 1 00:07:34.382 Fused Compare & Write: Not Supported 00:07:34.382 Scatter-Gather List 00:07:34.382 SGL Command Set: Supported 00:07:34.382 SGL Keyed: Not Supported 00:07:34.383 SGL Bit Bucket Descriptor: Not Supported 00:07:34.383 SGL Metadata Pointer: Not Supported 00:07:34.383 Oversized SGL: Not Supported 00:07:34.383 SGL Metadata Address: Not Supported 00:07:34.383 SGL Offset: Not Supported 00:07:34.383 Transport SGL Data Block: Not Supported 00:07:34.383 Replay Protected Memory Block: Not Supported 00:07:34.383 00:07:34.383 Firmware Slot Information 00:07:34.383 ========================= 00:07:34.383 Active slot: 1 00:07:34.383 Slot 1 Firmware Revision: 1.0 00:07:34.383 00:07:34.383 00:07:34.383 Commands Supported and Effects 00:07:34.383 ============================== 00:07:34.383 Admin Commands 00:07:34.383 -------------- 00:07:34.383 Delete I/O Submission Queue (00h): Supported 00:07:34.383 Create I/O Submission Queue (01h): Supported 00:07:34.383 Get Log Page (02h): Supported 00:07:34.383 Delete I/O Completion Queue 
(04h): Supported 00:07:34.383 Create I/O Completion Queue (05h): Supported 00:07:34.383 Identify (06h): Supported 00:07:34.383 Abort (08h): Supported 00:07:34.383 Set Features (09h): Supported 00:07:34.383 Get Features (0Ah): Supported 00:07:34.383 Asynchronous Event Request (0Ch): Supported 00:07:34.383 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:34.383 Directive Send (19h): Supported 00:07:34.383 Directive Receive (1Ah): Supported 00:07:34.383 Virtualization Management (1Ch): Supported 00:07:34.383 Doorbell Buffer Config (7Ch): Supported 00:07:34.383 Format NVM (80h): Supported LBA-Change 00:07:34.383 I/O Commands 00:07:34.383 ------------ 00:07:34.383 Flush (00h): Supported LBA-Change 00:07:34.383 Write (01h): Supported LBA-Change 00:07:34.383 Read (02h): Supported 00:07:34.383 Compare (05h): Supported 00:07:34.383 Write Zeroes (08h): Supported LBA-Change 00:07:34.383 Dataset Management (09h): Supported LBA-Change 00:07:34.383 Unknown (0Ch): Supported 00:07:34.383 Unknown (12h): Supported 00:07:34.383 Copy (19h): Supported LBA-Change 00:07:34.383 Unknown (1Dh): Supported LBA-Change 00:07:34.383 00:07:34.383 Error Log 00:07:34.383 ========= 00:07:34.383 00:07:34.383 Arbitration 00:07:34.383 =========== 00:07:34.383 Arbitration Burst: no limit 00:07:34.383 00:07:34.383 Power Management 00:07:34.383 ================ 00:07:34.383 Number of Power States: 1 00:07:34.383 Current Power State: Power State #0 00:07:34.383 Power State #0: 00:07:34.383 Max Power: 25.00 W 00:07:34.383 Non-Operational State: Operational 00:07:34.383 Entry Latency: 16 microseconds 00:07:34.383 Exit Latency: 4 microseconds 00:07:34.383 Relative Read Throughput: 0 00:07:34.383 Relative Read Latency: 0 00:07:34.383 Relative Write Throughput: 0 00:07:34.383 Relative Write Latency: 0 00:07:34.383 Idle Power: Not Reported 00:07:34.383 Active Power: Not Reported 00:07:34.383 Non-Operational Permissive Mode: Not Supported 00:07:34.383 00:07:34.383 Health Information 00:07:34.383 ================== 00:07:34.383 Critical Warnings: 00:07:34.383 Available Spare Space: OK 00:07:34.383 Temperature: OK 00:07:34.383 Device Reliability: OK 00:07:34.383 Read Only: No 00:07:34.383 Volatile Memory Backup: OK 00:07:34.383 Current Temperature: 323 Kelvin (50 Celsius) 00:07:34.383 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:34.383 Available Spare: 0% 00:07:34.383 Available Spare Threshold: 0% 00:07:34.383 Life Percentage Used: 0% 00:07:34.383 Data Units Read: 642 00:07:34.383 Data Units Written: 570 00:07:34.383 Host Read Commands: 33356 00:07:34.383 Host Write Commands: 33142 00:07:34.383 Controller Busy Time: 0 minutes 00:07:34.383 Power Cycles: 0 00:07:34.383 Power On Hours: 0 hours 00:07:34.383 Unsafe Shutdowns: 0 00:07:34.383 Unrecoverable Media Errors: 0 00:07:34.383 Lifetime Error Log Entries: 0 00:07:34.383 Warning Temperature Time: 0 minutes 00:07:34.383 Critical Temperature Time: 0 minutes 00:07:34.383 00:07:34.383 Number of Queues 00:07:34.383 ================ 00:07:34.383 Number of I/O Submission Queues: 64 00:07:34.383 Number of I/O Completion Queues: 64 00:07:34.383 00:07:34.383 ZNS Specific Controller Data 00:07:34.383 ============================ 00:07:34.383 Zone Append Size Limit: 0 00:07:34.383 00:07:34.383 00:07:34.383 Active Namespaces 00:07:34.383 ================= 00:07:34.383 Namespace ID:1 00:07:34.383 Error Recovery Timeout: Unlimited 00:07:34.383 Command Set Identifier: NVM (00h) 00:07:34.383 Deallocate: Supported 00:07:34.383 Deallocated/Unwritten Error: Supported 00:07:34.383 
Deallocated Read Value: All 0x00 00:07:34.383 Deallocate in Write Zeroes: Not Supported 00:07:34.383 Deallocated Guard Field: 0xFFFF 00:07:34.383 Flush: Supported 00:07:34.383 Reservation: Not Supported 00:07:34.383 Metadata Transferred as: Separate Metadata Buffer 00:07:34.383 Namespace Sharing Capabilities: Private 00:07:34.383 Size (in LBAs): 1548666 (5GiB) 00:07:34.383 Capacity (in LBAs): 1548666 (5GiB) 00:07:34.383 Utilization (in LBAs): 1548666 (5GiB) 00:07:34.383 Thin Provisioning: Not Supported 00:07:34.383 Per-NS Atomic Units: No 00:07:34.383 Maximum Single Source Range Length: 128 00:07:34.383 Maximum Copy Length: 128 00:07:34.383 Maximum Source Range Count: 128 00:07:34.383 NGUID/EUI64 Never Reused: No 00:07:34.383 Namespace Write Protected: No 00:07:34.383 Number of LBA Formats: 8 00:07:34.383 Current LBA Format: LBA Format #07 00:07:34.383 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:34.383 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:34.383 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:34.383 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:34.383 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:34.383 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:34.383 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:34.383 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:34.383 00:07:34.383 NVM Specific Namespace Data 00:07:34.383 =========================== 00:07:34.383 Logical Block Storage Tag Mask: 0 00:07:34.383 Protection Information Capabilities: 00:07:34.383 16b Guard Protection Information Storage Tag Support: No 00:07:34.383 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:34.383 Storage Tag Check Read Support: No 00:07:34.383 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.383 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.383 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.383 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.383 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.383 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.383 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.383 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.383 10:28:08 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:34.383 10:28:08 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:34.383 ===================================================== 00:07:34.383 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:34.383 ===================================================== 00:07:34.383 Controller Capabilities/Features 00:07:34.383 ================================ 00:07:34.383 Vendor ID: 1b36 00:07:34.383 Subsystem Vendor ID: 1af4 00:07:34.383 Serial Number: 12341 00:07:34.383 Model Number: QEMU NVMe Ctrl 00:07:34.383 Firmware Version: 8.0.0 00:07:34.383 Recommended Arb Burst: 6 00:07:34.383 IEEE OUI Identifier: 00 54 52 00:07:34.383 Multi-path I/O 00:07:34.383 May have multiple subsystem ports: No 00:07:34.383 May have multiple controllers: No 00:07:34.383 Associated with SR-IOV VF: No 
00:07:34.383 Max Data Transfer Size: 524288 00:07:34.383 Max Number of Namespaces: 256 00:07:34.383 Max Number of I/O Queues: 64 00:07:34.383 NVMe Specification Version (VS): 1.4 00:07:34.383 NVMe Specification Version (Identify): 1.4 00:07:34.383 Maximum Queue Entries: 2048 00:07:34.383 Contiguous Queues Required: Yes 00:07:34.383 Arbitration Mechanisms Supported 00:07:34.383 Weighted Round Robin: Not Supported 00:07:34.383 Vendor Specific: Not Supported 00:07:34.383 Reset Timeout: 7500 ms 00:07:34.383 Doorbell Stride: 4 bytes 00:07:34.383 NVM Subsystem Reset: Not Supported 00:07:34.383 Command Sets Supported 00:07:34.383 NVM Command Set: Supported 00:07:34.383 Boot Partition: Not Supported 00:07:34.383 Memory Page Size Minimum: 4096 bytes 00:07:34.383 Memory Page Size Maximum: 65536 bytes 00:07:34.383 Persistent Memory Region: Not Supported 00:07:34.383 Optional Asynchronous Events Supported 00:07:34.383 Namespace Attribute Notices: Supported 00:07:34.383 Firmware Activation Notices: Not Supported 00:07:34.383 ANA Change Notices: Not Supported 00:07:34.383 PLE Aggregate Log Change Notices: Not Supported 00:07:34.383 LBA Status Info Alert Notices: Not Supported 00:07:34.384 EGE Aggregate Log Change Notices: Not Supported 00:07:34.384 Normal NVM Subsystem Shutdown event: Not Supported 00:07:34.384 Zone Descriptor Change Notices: Not Supported 00:07:34.384 Discovery Log Change Notices: Not Supported 00:07:34.384 Controller Attributes 00:07:34.384 128-bit Host Identifier: Not Supported 00:07:34.384 Non-Operational Permissive Mode: Not Supported 00:07:34.384 NVM Sets: Not Supported 00:07:34.384 Read Recovery Levels: Not Supported 00:07:34.384 Endurance Groups: Not Supported 00:07:34.384 Predictable Latency Mode: Not Supported 00:07:34.384 Traffic Based Keep ALive: Not Supported 00:07:34.384 Namespace Granularity: Not Supported 00:07:34.384 SQ Associations: Not Supported 00:07:34.384 UUID List: Not Supported 00:07:34.384 Multi-Domain Subsystem: Not Supported 00:07:34.384 Fixed Capacity Management: Not Supported 00:07:34.384 Variable Capacity Management: Not Supported 00:07:34.384 Delete Endurance Group: Not Supported 00:07:34.384 Delete NVM Set: Not Supported 00:07:34.384 Extended LBA Formats Supported: Supported 00:07:34.384 Flexible Data Placement Supported: Not Supported 00:07:34.384 00:07:34.384 Controller Memory Buffer Support 00:07:34.384 ================================ 00:07:34.384 Supported: No 00:07:34.384 00:07:34.384 Persistent Memory Region Support 00:07:34.384 ================================ 00:07:34.384 Supported: No 00:07:34.384 00:07:34.384 Admin Command Set Attributes 00:07:34.384 ============================ 00:07:34.384 Security Send/Receive: Not Supported 00:07:34.384 Format NVM: Supported 00:07:34.384 Firmware Activate/Download: Not Supported 00:07:34.384 Namespace Management: Supported 00:07:34.384 Device Self-Test: Not Supported 00:07:34.384 Directives: Supported 00:07:34.384 NVMe-MI: Not Supported 00:07:34.384 Virtualization Management: Not Supported 00:07:34.384 Doorbell Buffer Config: Supported 00:07:34.384 Get LBA Status Capability: Not Supported 00:07:34.384 Command & Feature Lockdown Capability: Not Supported 00:07:34.384 Abort Command Limit: 4 00:07:34.384 Async Event Request Limit: 4 00:07:34.384 Number of Firmware Slots: N/A 00:07:34.384 Firmware Slot 1 Read-Only: N/A 00:07:34.384 Firmware Activation Without Reset: N/A 00:07:34.384 Multiple Update Detection Support: N/A 00:07:34.384 Firmware Update Granularity: No Information Provided 00:07:34.384 
Per-Namespace SMART Log: Yes 00:07:34.384 Asymmetric Namespace Access Log Page: Not Supported 00:07:34.384 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:34.384 Command Effects Log Page: Supported 00:07:34.384 Get Log Page Extended Data: Supported 00:07:34.384 Telemetry Log Pages: Not Supported 00:07:34.384 Persistent Event Log Pages: Not Supported 00:07:34.384 Supported Log Pages Log Page: May Support 00:07:34.384 Commands Supported & Effects Log Page: Not Supported 00:07:34.384 Feature Identifiers & Effects Log Page:May Support 00:07:34.384 NVMe-MI Commands & Effects Log Page: May Support 00:07:34.384 Data Area 4 for Telemetry Log: Not Supported 00:07:34.384 Error Log Page Entries Supported: 1 00:07:34.384 Keep Alive: Not Supported 00:07:34.384 00:07:34.384 NVM Command Set Attributes 00:07:34.384 ========================== 00:07:34.384 Submission Queue Entry Size 00:07:34.384 Max: 64 00:07:34.384 Min: 64 00:07:34.384 Completion Queue Entry Size 00:07:34.384 Max: 16 00:07:34.384 Min: 16 00:07:34.384 Number of Namespaces: 256 00:07:34.384 Compare Command: Supported 00:07:34.384 Write Uncorrectable Command: Not Supported 00:07:34.384 Dataset Management Command: Supported 00:07:34.384 Write Zeroes Command: Supported 00:07:34.384 Set Features Save Field: Supported 00:07:34.384 Reservations: Not Supported 00:07:34.384 Timestamp: Supported 00:07:34.384 Copy: Supported 00:07:34.384 Volatile Write Cache: Present 00:07:34.384 Atomic Write Unit (Normal): 1 00:07:34.384 Atomic Write Unit (PFail): 1 00:07:34.384 Atomic Compare & Write Unit: 1 00:07:34.384 Fused Compare & Write: Not Supported 00:07:34.384 Scatter-Gather List 00:07:34.384 SGL Command Set: Supported 00:07:34.384 SGL Keyed: Not Supported 00:07:34.384 SGL Bit Bucket Descriptor: Not Supported 00:07:34.384 SGL Metadata Pointer: Not Supported 00:07:34.384 Oversized SGL: Not Supported 00:07:34.384 SGL Metadata Address: Not Supported 00:07:34.384 SGL Offset: Not Supported 00:07:34.384 Transport SGL Data Block: Not Supported 00:07:34.384 Replay Protected Memory Block: Not Supported 00:07:34.384 00:07:34.384 Firmware Slot Information 00:07:34.384 ========================= 00:07:34.384 Active slot: 1 00:07:34.384 Slot 1 Firmware Revision: 1.0 00:07:34.384 00:07:34.384 00:07:34.384 Commands Supported and Effects 00:07:34.384 ============================== 00:07:34.384 Admin Commands 00:07:34.384 -------------- 00:07:34.384 Delete I/O Submission Queue (00h): Supported 00:07:34.384 Create I/O Submission Queue (01h): Supported 00:07:34.384 Get Log Page (02h): Supported 00:07:34.384 Delete I/O Completion Queue (04h): Supported 00:07:34.384 Create I/O Completion Queue (05h): Supported 00:07:34.384 Identify (06h): Supported 00:07:34.384 Abort (08h): Supported 00:07:34.384 Set Features (09h): Supported 00:07:34.384 Get Features (0Ah): Supported 00:07:34.384 Asynchronous Event Request (0Ch): Supported 00:07:34.384 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:34.384 Directive Send (19h): Supported 00:07:34.384 Directive Receive (1Ah): Supported 00:07:34.384 Virtualization Management (1Ch): Supported 00:07:34.384 Doorbell Buffer Config (7Ch): Supported 00:07:34.384 Format NVM (80h): Supported LBA-Change 00:07:34.384 I/O Commands 00:07:34.384 ------------ 00:07:34.384 Flush (00h): Supported LBA-Change 00:07:34.384 Write (01h): Supported LBA-Change 00:07:34.384 Read (02h): Supported 00:07:34.384 Compare (05h): Supported 00:07:34.384 Write Zeroes (08h): Supported LBA-Change 00:07:34.384 Dataset Management (09h): Supported LBA-Change 
00:07:34.384 Unknown (0Ch): Supported 00:07:34.384 Unknown (12h): Supported 00:07:34.384 Copy (19h): Supported LBA-Change 00:07:34.384 Unknown (1Dh): Supported LBA-Change 00:07:34.384 00:07:34.384 Error Log 00:07:34.384 ========= 00:07:34.384 00:07:34.384 Arbitration 00:07:34.384 =========== 00:07:34.384 Arbitration Burst: no limit 00:07:34.384 00:07:34.384 Power Management 00:07:34.384 ================ 00:07:34.384 Number of Power States: 1 00:07:34.384 Current Power State: Power State #0 00:07:34.384 Power State #0: 00:07:34.384 Max Power: 25.00 W 00:07:34.384 Non-Operational State: Operational 00:07:34.384 Entry Latency: 16 microseconds 00:07:34.384 Exit Latency: 4 microseconds 00:07:34.384 Relative Read Throughput: 0 00:07:34.384 Relative Read Latency: 0 00:07:34.384 Relative Write Throughput: 0 00:07:34.384 Relative Write Latency: 0 00:07:34.645 Idle Power: Not Reported 00:07:34.645 Active Power: Not Reported 00:07:34.645 Non-Operational Permissive Mode: Not Supported 00:07:34.645 00:07:34.645 Health Information 00:07:34.645 ================== 00:07:34.645 Critical Warnings: 00:07:34.645 Available Spare Space: OK 00:07:34.645 Temperature: OK 00:07:34.645 Device Reliability: OK 00:07:34.645 Read Only: No 00:07:34.645 Volatile Memory Backup: OK 00:07:34.645 Current Temperature: 323 Kelvin (50 Celsius) 00:07:34.645 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:34.645 Available Spare: 0% 00:07:34.645 Available Spare Threshold: 0% 00:07:34.645 Life Percentage Used: 0% 00:07:34.645 Data Units Read: 933 00:07:34.645 Data Units Written: 807 00:07:34.645 Host Read Commands: 48819 00:07:34.645 Host Write Commands: 47713 00:07:34.645 Controller Busy Time: 0 minutes 00:07:34.645 Power Cycles: 0 00:07:34.645 Power On Hours: 0 hours 00:07:34.645 Unsafe Shutdowns: 0 00:07:34.645 Unrecoverable Media Errors: 0 00:07:34.645 Lifetime Error Log Entries: 0 00:07:34.645 Warning Temperature Time: 0 minutes 00:07:34.645 Critical Temperature Time: 0 minutes 00:07:34.645 00:07:34.645 Number of Queues 00:07:34.645 ================ 00:07:34.645 Number of I/O Submission Queues: 64 00:07:34.645 Number of I/O Completion Queues: 64 00:07:34.645 00:07:34.645 ZNS Specific Controller Data 00:07:34.645 ============================ 00:07:34.645 Zone Append Size Limit: 0 00:07:34.645 00:07:34.645 00:07:34.645 Active Namespaces 00:07:34.645 ================= 00:07:34.645 Namespace ID:1 00:07:34.645 Error Recovery Timeout: Unlimited 00:07:34.645 Command Set Identifier: NVM (00h) 00:07:34.645 Deallocate: Supported 00:07:34.645 Deallocated/Unwritten Error: Supported 00:07:34.645 Deallocated Read Value: All 0x00 00:07:34.645 Deallocate in Write Zeroes: Not Supported 00:07:34.646 Deallocated Guard Field: 0xFFFF 00:07:34.646 Flush: Supported 00:07:34.646 Reservation: Not Supported 00:07:34.646 Namespace Sharing Capabilities: Private 00:07:34.646 Size (in LBAs): 1310720 (5GiB) 00:07:34.646 Capacity (in LBAs): 1310720 (5GiB) 00:07:34.646 Utilization (in LBAs): 1310720 (5GiB) 00:07:34.646 Thin Provisioning: Not Supported 00:07:34.646 Per-NS Atomic Units: No 00:07:34.646 Maximum Single Source Range Length: 128 00:07:34.646 Maximum Copy Length: 128 00:07:34.646 Maximum Source Range Count: 128 00:07:34.646 NGUID/EUI64 Never Reused: No 00:07:34.646 Namespace Write Protected: No 00:07:34.646 Number of LBA Formats: 8 00:07:34.646 Current LBA Format: LBA Format #04 00:07:34.646 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:34.646 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:34.646 LBA Format #02: Data Size: 512 
Metadata Size: 16 00:07:34.646 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:34.646 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:34.646 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:34.646 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:34.646 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:34.646 00:07:34.646 NVM Specific Namespace Data 00:07:34.646 =========================== 00:07:34.646 Logical Block Storage Tag Mask: 0 00:07:34.646 Protection Information Capabilities: 00:07:34.646 16b Guard Protection Information Storage Tag Support: No 00:07:34.646 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:34.646 Storage Tag Check Read Support: No 00:07:34.646 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.646 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.646 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.646 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.646 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.646 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.646 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.646 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.646 10:28:09 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:34.646 10:28:09 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:34.646 ===================================================== 00:07:34.646 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:34.646 ===================================================== 00:07:34.646 Controller Capabilities/Features 00:07:34.646 ================================ 00:07:34.646 Vendor ID: 1b36 00:07:34.646 Subsystem Vendor ID: 1af4 00:07:34.646 Serial Number: 12342 00:07:34.646 Model Number: QEMU NVMe Ctrl 00:07:34.646 Firmware Version: 8.0.0 00:07:34.646 Recommended Arb Burst: 6 00:07:34.646 IEEE OUI Identifier: 00 54 52 00:07:34.646 Multi-path I/O 00:07:34.646 May have multiple subsystem ports: No 00:07:34.646 May have multiple controllers: No 00:07:34.646 Associated with SR-IOV VF: No 00:07:34.646 Max Data Transfer Size: 524288 00:07:34.646 Max Number of Namespaces: 256 00:07:34.646 Max Number of I/O Queues: 64 00:07:34.646 NVMe Specification Version (VS): 1.4 00:07:34.646 NVMe Specification Version (Identify): 1.4 00:07:34.646 Maximum Queue Entries: 2048 00:07:34.646 Contiguous Queues Required: Yes 00:07:34.646 Arbitration Mechanisms Supported 00:07:34.646 Weighted Round Robin: Not Supported 00:07:34.646 Vendor Specific: Not Supported 00:07:34.646 Reset Timeout: 7500 ms 00:07:34.646 Doorbell Stride: 4 bytes 00:07:34.646 NVM Subsystem Reset: Not Supported 00:07:34.646 Command Sets Supported 00:07:34.646 NVM Command Set: Supported 00:07:34.646 Boot Partition: Not Supported 00:07:34.646 Memory Page Size Minimum: 4096 bytes 00:07:34.646 Memory Page Size Maximum: 65536 bytes 00:07:34.646 Persistent Memory Region: Not Supported 00:07:34.646 Optional Asynchronous Events Supported 00:07:34.646 Namespace Attribute Notices: Supported 00:07:34.646 Firmware Activation Notices: Not Supported 
00:07:34.646 ANA Change Notices: Not Supported 00:07:34.646 PLE Aggregate Log Change Notices: Not Supported 00:07:34.646 LBA Status Info Alert Notices: Not Supported 00:07:34.646 EGE Aggregate Log Change Notices: Not Supported 00:07:34.646 Normal NVM Subsystem Shutdown event: Not Supported 00:07:34.646 Zone Descriptor Change Notices: Not Supported 00:07:34.646 Discovery Log Change Notices: Not Supported 00:07:34.646 Controller Attributes 00:07:34.646 128-bit Host Identifier: Not Supported 00:07:34.646 Non-Operational Permissive Mode: Not Supported 00:07:34.646 NVM Sets: Not Supported 00:07:34.646 Read Recovery Levels: Not Supported 00:07:34.646 Endurance Groups: Not Supported 00:07:34.646 Predictable Latency Mode: Not Supported 00:07:34.646 Traffic Based Keep ALive: Not Supported 00:07:34.646 Namespace Granularity: Not Supported 00:07:34.646 SQ Associations: Not Supported 00:07:34.646 UUID List: Not Supported 00:07:34.646 Multi-Domain Subsystem: Not Supported 00:07:34.646 Fixed Capacity Management: Not Supported 00:07:34.646 Variable Capacity Management: Not Supported 00:07:34.646 Delete Endurance Group: Not Supported 00:07:34.646 Delete NVM Set: Not Supported 00:07:34.646 Extended LBA Formats Supported: Supported 00:07:34.646 Flexible Data Placement Supported: Not Supported 00:07:34.646 00:07:34.646 Controller Memory Buffer Support 00:07:34.646 ================================ 00:07:34.646 Supported: No 00:07:34.646 00:07:34.646 Persistent Memory Region Support 00:07:34.646 ================================ 00:07:34.646 Supported: No 00:07:34.646 00:07:34.646 Admin Command Set Attributes 00:07:34.646 ============================ 00:07:34.646 Security Send/Receive: Not Supported 00:07:34.646 Format NVM: Supported 00:07:34.646 Firmware Activate/Download: Not Supported 00:07:34.646 Namespace Management: Supported 00:07:34.646 Device Self-Test: Not Supported 00:07:34.646 Directives: Supported 00:07:34.646 NVMe-MI: Not Supported 00:07:34.646 Virtualization Management: Not Supported 00:07:34.646 Doorbell Buffer Config: Supported 00:07:34.646 Get LBA Status Capability: Not Supported 00:07:34.646 Command & Feature Lockdown Capability: Not Supported 00:07:34.646 Abort Command Limit: 4 00:07:34.646 Async Event Request Limit: 4 00:07:34.646 Number of Firmware Slots: N/A 00:07:34.646 Firmware Slot 1 Read-Only: N/A 00:07:34.646 Firmware Activation Without Reset: N/A 00:07:34.646 Multiple Update Detection Support: N/A 00:07:34.646 Firmware Update Granularity: No Information Provided 00:07:34.646 Per-Namespace SMART Log: Yes 00:07:34.646 Asymmetric Namespace Access Log Page: Not Supported 00:07:34.646 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:34.646 Command Effects Log Page: Supported 00:07:34.646 Get Log Page Extended Data: Supported 00:07:34.646 Telemetry Log Pages: Not Supported 00:07:34.646 Persistent Event Log Pages: Not Supported 00:07:34.646 Supported Log Pages Log Page: May Support 00:07:34.646 Commands Supported & Effects Log Page: Not Supported 00:07:34.646 Feature Identifiers & Effects Log Page:May Support 00:07:34.646 NVMe-MI Commands & Effects Log Page: May Support 00:07:34.646 Data Area 4 for Telemetry Log: Not Supported 00:07:34.646 Error Log Page Entries Supported: 1 00:07:34.646 Keep Alive: Not Supported 00:07:34.646 00:07:34.646 NVM Command Set Attributes 00:07:34.646 ========================== 00:07:34.646 Submission Queue Entry Size 00:07:34.646 Max: 64 00:07:34.646 Min: 64 00:07:34.646 Completion Queue Entry Size 00:07:34.646 Max: 16 00:07:34.646 Min: 16 00:07:34.646 Number of 
Namespaces: 256 00:07:34.646 Compare Command: Supported 00:07:34.646 Write Uncorrectable Command: Not Supported 00:07:34.646 Dataset Management Command: Supported 00:07:34.646 Write Zeroes Command: Supported 00:07:34.646 Set Features Save Field: Supported 00:07:34.646 Reservations: Not Supported 00:07:34.646 Timestamp: Supported 00:07:34.646 Copy: Supported 00:07:34.646 Volatile Write Cache: Present 00:07:34.646 Atomic Write Unit (Normal): 1 00:07:34.646 Atomic Write Unit (PFail): 1 00:07:34.646 Atomic Compare & Write Unit: 1 00:07:34.646 Fused Compare & Write: Not Supported 00:07:34.646 Scatter-Gather List 00:07:34.646 SGL Command Set: Supported 00:07:34.646 SGL Keyed: Not Supported 00:07:34.646 SGL Bit Bucket Descriptor: Not Supported 00:07:34.646 SGL Metadata Pointer: Not Supported 00:07:34.646 Oversized SGL: Not Supported 00:07:34.646 SGL Metadata Address: Not Supported 00:07:34.646 SGL Offset: Not Supported 00:07:34.646 Transport SGL Data Block: Not Supported 00:07:34.646 Replay Protected Memory Block: Not Supported 00:07:34.646 00:07:34.646 Firmware Slot Information 00:07:34.646 ========================= 00:07:34.646 Active slot: 1 00:07:34.646 Slot 1 Firmware Revision: 1.0 00:07:34.646 00:07:34.646 00:07:34.646 Commands Supported and Effects 00:07:34.647 ============================== 00:07:34.647 Admin Commands 00:07:34.647 -------------- 00:07:34.647 Delete I/O Submission Queue (00h): Supported 00:07:34.647 Create I/O Submission Queue (01h): Supported 00:07:34.647 Get Log Page (02h): Supported 00:07:34.647 Delete I/O Completion Queue (04h): Supported 00:07:34.647 Create I/O Completion Queue (05h): Supported 00:07:34.647 Identify (06h): Supported 00:07:34.647 Abort (08h): Supported 00:07:34.647 Set Features (09h): Supported 00:07:34.647 Get Features (0Ah): Supported 00:07:34.647 Asynchronous Event Request (0Ch): Supported 00:07:34.647 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:34.647 Directive Send (19h): Supported 00:07:34.647 Directive Receive (1Ah): Supported 00:07:34.647 Virtualization Management (1Ch): Supported 00:07:34.647 Doorbell Buffer Config (7Ch): Supported 00:07:34.647 Format NVM (80h): Supported LBA-Change 00:07:34.647 I/O Commands 00:07:34.647 ------------ 00:07:34.647 Flush (00h): Supported LBA-Change 00:07:34.647 Write (01h): Supported LBA-Change 00:07:34.647 Read (02h): Supported 00:07:34.647 Compare (05h): Supported 00:07:34.647 Write Zeroes (08h): Supported LBA-Change 00:07:34.647 Dataset Management (09h): Supported LBA-Change 00:07:34.647 Unknown (0Ch): Supported 00:07:34.647 Unknown (12h): Supported 00:07:34.647 Copy (19h): Supported LBA-Change 00:07:34.647 Unknown (1Dh): Supported LBA-Change 00:07:34.647 00:07:34.647 Error Log 00:07:34.647 ========= 00:07:34.647 00:07:34.647 Arbitration 00:07:34.647 =========== 00:07:34.647 Arbitration Burst: no limit 00:07:34.647 00:07:34.647 Power Management 00:07:34.647 ================ 00:07:34.647 Number of Power States: 1 00:07:34.647 Current Power State: Power State #0 00:07:34.647 Power State #0: 00:07:34.647 Max Power: 25.00 W 00:07:34.647 Non-Operational State: Operational 00:07:34.647 Entry Latency: 16 microseconds 00:07:34.647 Exit Latency: 4 microseconds 00:07:34.647 Relative Read Throughput: 0 00:07:34.647 Relative Read Latency: 0 00:07:34.647 Relative Write Throughput: 0 00:07:34.647 Relative Write Latency: 0 00:07:34.647 Idle Power: Not Reported 00:07:34.647 Active Power: Not Reported 00:07:34.647 Non-Operational Permissive Mode: Not Supported 00:07:34.647 00:07:34.647 Health Information 
00:07:34.647 ================== 00:07:34.647 Critical Warnings: 00:07:34.647 Available Spare Space: OK 00:07:34.647 Temperature: OK 00:07:34.647 Device Reliability: OK 00:07:34.647 Read Only: No 00:07:34.647 Volatile Memory Backup: OK 00:07:34.647 Current Temperature: 323 Kelvin (50 Celsius) 00:07:34.647 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:34.647 Available Spare: 0% 00:07:34.647 Available Spare Threshold: 0% 00:07:34.647 Life Percentage Used: 0% 00:07:34.647 Data Units Read: 2040 00:07:34.647 Data Units Written: 1827 00:07:34.647 Host Read Commands: 101798 00:07:34.647 Host Write Commands: 100067 00:07:34.647 Controller Busy Time: 0 minutes 00:07:34.647 Power Cycles: 0 00:07:34.647 Power On Hours: 0 hours 00:07:34.647 Unsafe Shutdowns: 0 00:07:34.647 Unrecoverable Media Errors: 0 00:07:34.647 Lifetime Error Log Entries: 0 00:07:34.647 Warning Temperature Time: 0 minutes 00:07:34.647 Critical Temperature Time: 0 minutes 00:07:34.647 00:07:34.647 Number of Queues 00:07:34.647 ================ 00:07:34.647 Number of I/O Submission Queues: 64 00:07:34.647 Number of I/O Completion Queues: 64 00:07:34.647 00:07:34.647 ZNS Specific Controller Data 00:07:34.647 ============================ 00:07:34.647 Zone Append Size Limit: 0 00:07:34.647 00:07:34.647 00:07:34.647 Active Namespaces 00:07:34.647 ================= 00:07:34.647 Namespace ID:1 00:07:34.647 Error Recovery Timeout: Unlimited 00:07:34.647 Command Set Identifier: NVM (00h) 00:07:34.647 Deallocate: Supported 00:07:34.647 Deallocated/Unwritten Error: Supported 00:07:34.647 Deallocated Read Value: All 0x00 00:07:34.647 Deallocate in Write Zeroes: Not Supported 00:07:34.647 Deallocated Guard Field: 0xFFFF 00:07:34.647 Flush: Supported 00:07:34.647 Reservation: Not Supported 00:07:34.647 Namespace Sharing Capabilities: Private 00:07:34.647 Size (in LBAs): 1048576 (4GiB) 00:07:34.647 Capacity (in LBAs): 1048576 (4GiB) 00:07:34.647 Utilization (in LBAs): 1048576 (4GiB) 00:07:34.647 Thin Provisioning: Not Supported 00:07:34.647 Per-NS Atomic Units: No 00:07:34.647 Maximum Single Source Range Length: 128 00:07:34.647 Maximum Copy Length: 128 00:07:34.647 Maximum Source Range Count: 128 00:07:34.647 NGUID/EUI64 Never Reused: No 00:07:34.647 Namespace Write Protected: No 00:07:34.647 Number of LBA Formats: 8 00:07:34.647 Current LBA Format: LBA Format #04 00:07:34.647 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:34.647 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:34.647 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:34.647 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:34.647 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:34.647 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:34.647 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:34.647 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:34.647 00:07:34.647 NVM Specific Namespace Data 00:07:34.647 =========================== 00:07:34.647 Logical Block Storage Tag Mask: 0 00:07:34.647 Protection Information Capabilities: 00:07:34.647 16b Guard Protection Information Storage Tag Support: No 00:07:34.647 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:34.647 Storage Tag Check Read Support: No 00:07:34.647 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.647 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.647 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b 
Guard PI 00:07:34.647 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.647 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.647 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.647 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.647 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.647 Namespace ID:2 00:07:34.647 Error Recovery Timeout: Unlimited 00:07:34.647 Command Set Identifier: NVM (00h) 00:07:34.647 Deallocate: Supported 00:07:34.647 Deallocated/Unwritten Error: Supported 00:07:34.647 Deallocated Read Value: All 0x00 00:07:34.647 Deallocate in Write Zeroes: Not Supported 00:07:34.647 Deallocated Guard Field: 0xFFFF 00:07:34.647 Flush: Supported 00:07:34.647 Reservation: Not Supported 00:07:34.647 Namespace Sharing Capabilities: Private 00:07:34.647 Size (in LBAs): 1048576 (4GiB) 00:07:34.647 Capacity (in LBAs): 1048576 (4GiB) 00:07:34.647 Utilization (in LBAs): 1048576 (4GiB) 00:07:34.647 Thin Provisioning: Not Supported 00:07:34.647 Per-NS Atomic Units: No 00:07:34.647 Maximum Single Source Range Length: 128 00:07:34.647 Maximum Copy Length: 128 00:07:34.647 Maximum Source Range Count: 128 00:07:34.647 NGUID/EUI64 Never Reused: No 00:07:34.647 Namespace Write Protected: No 00:07:34.647 Number of LBA Formats: 8 00:07:34.647 Current LBA Format: LBA Format #04 00:07:34.647 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:34.647 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:34.647 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:34.647 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:34.647 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:34.647 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:34.647 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:34.647 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:34.647 00:07:34.647 NVM Specific Namespace Data 00:07:34.647 =========================== 00:07:34.647 Logical Block Storage Tag Mask: 0 00:07:34.647 Protection Information Capabilities: 00:07:34.647 16b Guard Protection Information Storage Tag Support: No 00:07:34.647 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:34.647 Storage Tag Check Read Support: No 00:07:34.647 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.647 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.647 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.647 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.647 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.647 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.647 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.647 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.647 Namespace ID:3 00:07:34.647 Error Recovery Timeout: Unlimited 00:07:34.647 Command Set Identifier: NVM (00h) 00:07:34.647 Deallocate: Supported 00:07:34.647 Deallocated/Unwritten Error: Supported 00:07:34.647 Deallocated Read Value: All 0x00 00:07:34.647 Deallocate in 
Write Zeroes: Not Supported 00:07:34.647 Deallocated Guard Field: 0xFFFF 00:07:34.648 Flush: Supported 00:07:34.648 Reservation: Not Supported 00:07:34.648 Namespace Sharing Capabilities: Private 00:07:34.648 Size (in LBAs): 1048576 (4GiB) 00:07:34.648 Capacity (in LBAs): 1048576 (4GiB) 00:07:34.648 Utilization (in LBAs): 1048576 (4GiB) 00:07:34.648 Thin Provisioning: Not Supported 00:07:34.648 Per-NS Atomic Units: No 00:07:34.648 Maximum Single Source Range Length: 128 00:07:34.648 Maximum Copy Length: 128 00:07:34.648 Maximum Source Range Count: 128 00:07:34.648 NGUID/EUI64 Never Reused: No 00:07:34.648 Namespace Write Protected: No 00:07:34.648 Number of LBA Formats: 8 00:07:34.648 Current LBA Format: LBA Format #04 00:07:34.648 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:34.648 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:34.648 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:34.648 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:34.648 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:34.648 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:34.648 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:34.648 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:34.648 00:07:34.648 NVM Specific Namespace Data 00:07:34.648 =========================== 00:07:34.648 Logical Block Storage Tag Mask: 0 00:07:34.648 Protection Information Capabilities: 00:07:34.648 16b Guard Protection Information Storage Tag Support: No 00:07:34.648 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:34.648 Storage Tag Check Read Support: No 00:07:34.648 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.648 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.648 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.648 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.648 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.648 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.648 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.648 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.648 10:28:09 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:34.648 10:28:09 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:34.909 ===================================================== 00:07:34.909 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:34.909 ===================================================== 00:07:34.909 Controller Capabilities/Features 00:07:34.909 ================================ 00:07:34.909 Vendor ID: 1b36 00:07:34.909 Subsystem Vendor ID: 1af4 00:07:34.909 Serial Number: 12343 00:07:34.909 Model Number: QEMU NVMe Ctrl 00:07:34.909 Firmware Version: 8.0.0 00:07:34.909 Recommended Arb Burst: 6 00:07:34.909 IEEE OUI Identifier: 00 54 52 00:07:34.909 Multi-path I/O 00:07:34.909 May have multiple subsystem ports: No 00:07:34.909 May have multiple controllers: Yes 00:07:34.909 Associated with SR-IOV VF: No 00:07:34.909 Max Data Transfer Size: 524288 00:07:34.909 Max Number of Namespaces: 256 00:07:34.909 Max Number of I/O Queues: 64 
00:07:34.909 NVMe Specification Version (VS): 1.4 00:07:34.909 NVMe Specification Version (Identify): 1.4 00:07:34.909 Maximum Queue Entries: 2048 00:07:34.909 Contiguous Queues Required: Yes 00:07:34.909 Arbitration Mechanisms Supported 00:07:34.909 Weighted Round Robin: Not Supported 00:07:34.909 Vendor Specific: Not Supported 00:07:34.909 Reset Timeout: 7500 ms 00:07:34.909 Doorbell Stride: 4 bytes 00:07:34.909 NVM Subsystem Reset: Not Supported 00:07:34.909 Command Sets Supported 00:07:34.909 NVM Command Set: Supported 00:07:34.909 Boot Partition: Not Supported 00:07:34.909 Memory Page Size Minimum: 4096 bytes 00:07:34.909 Memory Page Size Maximum: 65536 bytes 00:07:34.909 Persistent Memory Region: Not Supported 00:07:34.909 Optional Asynchronous Events Supported 00:07:34.909 Namespace Attribute Notices: Supported 00:07:34.909 Firmware Activation Notices: Not Supported 00:07:34.909 ANA Change Notices: Not Supported 00:07:34.909 PLE Aggregate Log Change Notices: Not Supported 00:07:34.909 LBA Status Info Alert Notices: Not Supported 00:07:34.909 EGE Aggregate Log Change Notices: Not Supported 00:07:34.909 Normal NVM Subsystem Shutdown event: Not Supported 00:07:34.909 Zone Descriptor Change Notices: Not Supported 00:07:34.909 Discovery Log Change Notices: Not Supported 00:07:34.909 Controller Attributes 00:07:34.909 128-bit Host Identifier: Not Supported 00:07:34.909 Non-Operational Permissive Mode: Not Supported 00:07:34.909 NVM Sets: Not Supported 00:07:34.909 Read Recovery Levels: Not Supported 00:07:34.909 Endurance Groups: Supported 00:07:34.909 Predictable Latency Mode: Not Supported 00:07:34.909 Traffic Based Keep ALive: Not Supported 00:07:34.909 Namespace Granularity: Not Supported 00:07:34.909 SQ Associations: Not Supported 00:07:34.909 UUID List: Not Supported 00:07:34.909 Multi-Domain Subsystem: Not Supported 00:07:34.909 Fixed Capacity Management: Not Supported 00:07:34.909 Variable Capacity Management: Not Supported 00:07:34.909 Delete Endurance Group: Not Supported 00:07:34.909 Delete NVM Set: Not Supported 00:07:34.909 Extended LBA Formats Supported: Supported 00:07:34.909 Flexible Data Placement Supported: Supported 00:07:34.909 00:07:34.909 Controller Memory Buffer Support 00:07:34.909 ================================ 00:07:34.909 Supported: No 00:07:34.909 00:07:34.909 Persistent Memory Region Support 00:07:34.909 ================================ 00:07:34.909 Supported: No 00:07:34.909 00:07:34.910 Admin Command Set Attributes 00:07:34.910 ============================ 00:07:34.910 Security Send/Receive: Not Supported 00:07:34.910 Format NVM: Supported 00:07:34.910 Firmware Activate/Download: Not Supported 00:07:34.910 Namespace Management: Supported 00:07:34.910 Device Self-Test: Not Supported 00:07:34.910 Directives: Supported 00:07:34.910 NVMe-MI: Not Supported 00:07:34.910 Virtualization Management: Not Supported 00:07:34.910 Doorbell Buffer Config: Supported 00:07:34.910 Get LBA Status Capability: Not Supported 00:07:34.910 Command & Feature Lockdown Capability: Not Supported 00:07:34.910 Abort Command Limit: 4 00:07:34.910 Async Event Request Limit: 4 00:07:34.910 Number of Firmware Slots: N/A 00:07:34.910 Firmware Slot 1 Read-Only: N/A 00:07:34.910 Firmware Activation Without Reset: N/A 00:07:34.910 Multiple Update Detection Support: N/A 00:07:34.910 Firmware Update Granularity: No Information Provided 00:07:34.910 Per-Namespace SMART Log: Yes 00:07:34.910 Asymmetric Namespace Access Log Page: Not Supported 00:07:34.910 Subsystem NQN: 
nqn.2019-08.org.qemu:fdp-subsys3 00:07:34.910 Command Effects Log Page: Supported 00:07:34.910 Get Log Page Extended Data: Supported 00:07:34.910 Telemetry Log Pages: Not Supported 00:07:34.910 Persistent Event Log Pages: Not Supported 00:07:34.910 Supported Log Pages Log Page: May Support 00:07:34.910 Commands Supported & Effects Log Page: Not Supported 00:07:34.910 Feature Identifiers & Effects Log Page:May Support 00:07:34.910 NVMe-MI Commands & Effects Log Page: May Support 00:07:34.910 Data Area 4 for Telemetry Log: Not Supported 00:07:34.910 Error Log Page Entries Supported: 1 00:07:34.910 Keep Alive: Not Supported 00:07:34.910 00:07:34.910 NVM Command Set Attributes 00:07:34.910 ========================== 00:07:34.910 Submission Queue Entry Size 00:07:34.910 Max: 64 00:07:34.910 Min: 64 00:07:34.910 Completion Queue Entry Size 00:07:34.910 Max: 16 00:07:34.910 Min: 16 00:07:34.910 Number of Namespaces: 256 00:07:34.910 Compare Command: Supported 00:07:34.910 Write Uncorrectable Command: Not Supported 00:07:34.910 Dataset Management Command: Supported 00:07:34.910 Write Zeroes Command: Supported 00:07:34.910 Set Features Save Field: Supported 00:07:34.910 Reservations: Not Supported 00:07:34.910 Timestamp: Supported 00:07:34.910 Copy: Supported 00:07:34.910 Volatile Write Cache: Present 00:07:34.910 Atomic Write Unit (Normal): 1 00:07:34.910 Atomic Write Unit (PFail): 1 00:07:34.910 Atomic Compare & Write Unit: 1 00:07:34.910 Fused Compare & Write: Not Supported 00:07:34.910 Scatter-Gather List 00:07:34.910 SGL Command Set: Supported 00:07:34.910 SGL Keyed: Not Supported 00:07:34.910 SGL Bit Bucket Descriptor: Not Supported 00:07:34.910 SGL Metadata Pointer: Not Supported 00:07:34.910 Oversized SGL: Not Supported 00:07:34.910 SGL Metadata Address: Not Supported 00:07:34.910 SGL Offset: Not Supported 00:07:34.910 Transport SGL Data Block: Not Supported 00:07:34.910 Replay Protected Memory Block: Not Supported 00:07:34.910 00:07:34.910 Firmware Slot Information 00:07:34.910 ========================= 00:07:34.910 Active slot: 1 00:07:34.910 Slot 1 Firmware Revision: 1.0 00:07:34.910 00:07:34.910 00:07:34.910 Commands Supported and Effects 00:07:34.910 ============================== 00:07:34.910 Admin Commands 00:07:34.910 -------------- 00:07:34.910 Delete I/O Submission Queue (00h): Supported 00:07:34.910 Create I/O Submission Queue (01h): Supported 00:07:34.910 Get Log Page (02h): Supported 00:07:34.910 Delete I/O Completion Queue (04h): Supported 00:07:34.910 Create I/O Completion Queue (05h): Supported 00:07:34.910 Identify (06h): Supported 00:07:34.910 Abort (08h): Supported 00:07:34.910 Set Features (09h): Supported 00:07:34.910 Get Features (0Ah): Supported 00:07:34.910 Asynchronous Event Request (0Ch): Supported 00:07:34.910 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:34.910 Directive Send (19h): Supported 00:07:34.910 Directive Receive (1Ah): Supported 00:07:34.910 Virtualization Management (1Ch): Supported 00:07:34.910 Doorbell Buffer Config (7Ch): Supported 00:07:34.910 Format NVM (80h): Supported LBA-Change 00:07:34.910 I/O Commands 00:07:34.910 ------------ 00:07:34.910 Flush (00h): Supported LBA-Change 00:07:34.910 Write (01h): Supported LBA-Change 00:07:34.910 Read (02h): Supported 00:07:34.910 Compare (05h): Supported 00:07:34.910 Write Zeroes (08h): Supported LBA-Change 00:07:34.910 Dataset Management (09h): Supported LBA-Change 00:07:34.910 Unknown (0Ch): Supported 00:07:34.910 Unknown (12h): Supported 00:07:34.910 Copy (19h): Supported LBA-Change 
00:07:34.910 Unknown (1Dh): Supported LBA-Change 00:07:34.910 00:07:34.910 Error Log 00:07:34.910 ========= 00:07:34.910 00:07:34.910 Arbitration 00:07:34.910 =========== 00:07:34.910 Arbitration Burst: no limit 00:07:34.910 00:07:34.910 Power Management 00:07:34.910 ================ 00:07:34.910 Number of Power States: 1 00:07:34.910 Current Power State: Power State #0 00:07:34.910 Power State #0: 00:07:34.910 Max Power: 25.00 W 00:07:34.910 Non-Operational State: Operational 00:07:34.910 Entry Latency: 16 microseconds 00:07:34.910 Exit Latency: 4 microseconds 00:07:34.910 Relative Read Throughput: 0 00:07:34.910 Relative Read Latency: 0 00:07:34.910 Relative Write Throughput: 0 00:07:34.910 Relative Write Latency: 0 00:07:34.910 Idle Power: Not Reported 00:07:34.910 Active Power: Not Reported 00:07:34.910 Non-Operational Permissive Mode: Not Supported 00:07:34.910 00:07:34.910 Health Information 00:07:34.910 ================== 00:07:34.910 Critical Warnings: 00:07:34.910 Available Spare Space: OK 00:07:34.910 Temperature: OK 00:07:34.910 Device Reliability: OK 00:07:34.910 Read Only: No 00:07:34.910 Volatile Memory Backup: OK 00:07:34.910 Current Temperature: 323 Kelvin (50 Celsius) 00:07:34.910 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:34.910 Available Spare: 0% 00:07:34.910 Available Spare Threshold: 0% 00:07:34.910 Life Percentage Used: 0% 00:07:34.910 Data Units Read: 786 00:07:34.910 Data Units Written: 715 00:07:34.910 Host Read Commands: 35018 00:07:34.910 Host Write Commands: 34441 00:07:34.910 Controller Busy Time: 0 minutes 00:07:34.910 Power Cycles: 0 00:07:34.910 Power On Hours: 0 hours 00:07:34.910 Unsafe Shutdowns: 0 00:07:34.910 Unrecoverable Media Errors: 0 00:07:34.910 Lifetime Error Log Entries: 0 00:07:34.910 Warning Temperature Time: 0 minutes 00:07:34.910 Critical Temperature Time: 0 minutes 00:07:34.910 00:07:34.910 Number of Queues 00:07:34.910 ================ 00:07:34.910 Number of I/O Submission Queues: 64 00:07:34.910 Number of I/O Completion Queues: 64 00:07:34.910 00:07:34.910 ZNS Specific Controller Data 00:07:34.910 ============================ 00:07:34.910 Zone Append Size Limit: 0 00:07:34.910 00:07:34.910 00:07:34.910 Active Namespaces 00:07:34.910 ================= 00:07:34.910 Namespace ID:1 00:07:34.910 Error Recovery Timeout: Unlimited 00:07:34.910 Command Set Identifier: NVM (00h) 00:07:34.910 Deallocate: Supported 00:07:34.910 Deallocated/Unwritten Error: Supported 00:07:34.910 Deallocated Read Value: All 0x00 00:07:34.910 Deallocate in Write Zeroes: Not Supported 00:07:34.910 Deallocated Guard Field: 0xFFFF 00:07:34.910 Flush: Supported 00:07:34.910 Reservation: Not Supported 00:07:34.910 Namespace Sharing Capabilities: Multiple Controllers 00:07:34.910 Size (in LBAs): 262144 (1GiB) 00:07:34.910 Capacity (in LBAs): 262144 (1GiB) 00:07:34.910 Utilization (in LBAs): 262144 (1GiB) 00:07:34.910 Thin Provisioning: Not Supported 00:07:34.910 Per-NS Atomic Units: No 00:07:34.910 Maximum Single Source Range Length: 128 00:07:34.910 Maximum Copy Length: 128 00:07:34.910 Maximum Source Range Count: 128 00:07:34.910 NGUID/EUI64 Never Reused: No 00:07:34.910 Namespace Write Protected: No 00:07:34.910 Endurance group ID: 1 00:07:34.910 Number of LBA Formats: 8 00:07:34.910 Current LBA Format: LBA Format #04 00:07:34.910 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:34.910 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:34.910 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:34.910 LBA Format #03: Data Size: 512 Metadata Size: 64 
00:07:34.910 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:34.910 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:34.910 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:34.910 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:34.910 00:07:34.910 Get Feature FDP: 00:07:34.910 ================ 00:07:34.910 Enabled: Yes 00:07:34.910 FDP configuration index: 0 00:07:34.910 00:07:34.910 FDP configurations log page 00:07:34.910 =========================== 00:07:34.910 Number of FDP configurations: 1 00:07:34.910 Version: 0 00:07:34.910 Size: 112 00:07:34.910 FDP Configuration Descriptor: 0 00:07:34.910 Descriptor Size: 96 00:07:34.911 Reclaim Group Identifier format: 2 00:07:34.911 FDP Volatile Write Cache: Not Present 00:07:34.911 FDP Configuration: Valid 00:07:34.911 Vendor Specific Size: 0 00:07:34.911 Number of Reclaim Groups: 2 00:07:34.911 Number of Recalim Unit Handles: 8 00:07:34.911 Max Placement Identifiers: 128 00:07:34.911 Number of Namespaces Suppprted: 256 00:07:34.911 Reclaim unit Nominal Size: 6000000 bytes 00:07:34.911 Estimated Reclaim Unit Time Limit: Not Reported 00:07:34.911 RUH Desc #000: RUH Type: Initially Isolated 00:07:34.911 RUH Desc #001: RUH Type: Initially Isolated 00:07:34.911 RUH Desc #002: RUH Type: Initially Isolated 00:07:34.911 RUH Desc #003: RUH Type: Initially Isolated 00:07:34.911 RUH Desc #004: RUH Type: Initially Isolated 00:07:34.911 RUH Desc #005: RUH Type: Initially Isolated 00:07:34.911 RUH Desc #006: RUH Type: Initially Isolated 00:07:34.911 RUH Desc #007: RUH Type: Initially Isolated 00:07:34.911 00:07:34.911 FDP reclaim unit handle usage log page 00:07:34.911 ====================================== 00:07:34.911 Number of Reclaim Unit Handles: 8 00:07:34.911 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:34.911 RUH Usage Desc #001: RUH Attributes: Unused 00:07:34.911 RUH Usage Desc #002: RUH Attributes: Unused 00:07:34.911 RUH Usage Desc #003: RUH Attributes: Unused 00:07:34.911 RUH Usage Desc #004: RUH Attributes: Unused 00:07:34.911 RUH Usage Desc #005: RUH Attributes: Unused 00:07:34.911 RUH Usage Desc #006: RUH Attributes: Unused 00:07:34.911 RUH Usage Desc #007: RUH Attributes: Unused 00:07:34.911 00:07:34.911 FDP statistics log page 00:07:34.911 ======================= 00:07:34.911 Host bytes with metadata written: 457613312 00:07:34.911 Media bytes with metadata written: 457658368 00:07:34.911 Media bytes erased: 0 00:07:34.911 00:07:34.911 FDP events log page 00:07:34.911 =================== 00:07:34.911 Number of FDP events: 0 00:07:34.911 00:07:34.911 NVM Specific Namespace Data 00:07:34.911 =========================== 00:07:34.911 Logical Block Storage Tag Mask: 0 00:07:34.911 Protection Information Capabilities: 00:07:34.911 16b Guard Protection Information Storage Tag Support: No 00:07:34.911 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:34.911 Storage Tag Check Read Support: No 00:07:34.911 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.911 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.911 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.911 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.911 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.911 Extended LBA Format #05: Storage Tag Size: 0 , 
Protection Information Format: 16b Guard PI 00:07:34.911 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.911 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:34.911 00:07:34.911 real 0m1.022s 00:07:34.911 user 0m0.332s 00:07:34.911 sys 0m0.473s 00:07:34.911 ************************************ 00:07:34.911 END TEST nvme_identify 00:07:34.911 ************************************ 00:07:34.911 10:28:09 nvme.nvme_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:34.911 10:28:09 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:34.911 10:28:09 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:34.911 10:28:09 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:34.911 10:28:09 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:34.911 10:28:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:34.911 ************************************ 00:07:34.911 START TEST nvme_perf 00:07:34.911 ************************************ 00:07:34.911 10:28:09 nvme.nvme_perf -- common/autotest_common.sh@1125 -- # nvme_perf 00:07:34.911 10:28:09 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:36.294 Initializing NVMe Controllers 00:07:36.294 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:36.294 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:36.294 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:36.294 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:36.294 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:36.294 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:36.294 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:36.294 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:36.294 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:36.294 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:36.294 Initialization complete. Launching workers. 
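The MiB/s figures in the device table that follows can be cross-checked against the IOPS column and the 12288-byte I/O size set by -o on the spdk_nvme_perf command line above (MiB/s = IOPS * io_size / 2^20). A minimal shell sketch of that check, using only values printed in this log; iops and io_size below are illustrative variables for the cross-check, not part of the SPDK scripts:

    iops=15222.74    # IOPS reported below for PCIE (0000:00:10.0) NSID 1 from core 0
    io_size=12288    # bytes, from the -o 12288 option on the spdk_nvme_perf invocation above
    awk -v iops="$iops" -v sz="$io_size" 'BEGIN { printf "%.2f MiB/s\n", iops * sz / 1048576 }'

This prints 178.39 MiB/s, matching that device's row; the Total row works out the same way (91400.43 * 12288 / 2^20 is approximately 1071.10 MiB/s).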
00:07:36.294 ======================================================== 00:07:36.294 Latency(us) 00:07:36.294 Device Information : IOPS MiB/s Average min max 00:07:36.294 PCIE (0000:00:10.0) NSID 1 from core 0: 15222.74 178.39 8411.40 4708.82 31882.65 00:07:36.294 PCIE (0000:00:11.0) NSID 1 from core 0: 15222.74 178.39 8405.87 4605.41 31225.77 00:07:36.294 PCIE (0000:00:13.0) NSID 1 from core 0: 15222.74 178.39 8398.34 4011.50 30938.17 00:07:36.294 PCIE (0000:00:12.0) NSID 1 from core 0: 15222.74 178.39 8391.20 3783.82 30343.76 00:07:36.294 PCIE (0000:00:12.0) NSID 2 from core 0: 15222.74 178.39 8384.05 3617.60 29858.55 00:07:36.294 PCIE (0000:00:12.0) NSID 3 from core 0: 15286.71 179.14 8341.43 3331.46 24986.88 00:07:36.294 ======================================================== 00:07:36.294 Total : 91400.43 1071.10 8388.68 3331.46 31882.65 00:07:36.294 00:07:36.294 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:36.294 ================================================================================= 00:07:36.294 1.00000% : 6125.095us 00:07:36.294 10.00000% : 6351.951us 00:07:36.294 25.00000% : 6604.012us 00:07:36.294 50.00000% : 7007.311us 00:07:36.294 75.00000% : 10536.172us 00:07:36.294 90.00000% : 12300.603us 00:07:36.294 95.00000% : 13208.025us 00:07:36.294 98.00000% : 14216.271us 00:07:36.294 99.00000% : 16333.588us 00:07:36.294 99.50000% : 24500.382us 00:07:36.294 99.90000% : 31658.929us 00:07:36.294 99.99000% : 31860.578us 00:07:36.294 99.99900% : 32062.228us 00:07:36.294 99.99990% : 32062.228us 00:07:36.294 99.99999% : 32062.228us 00:07:36.294 00:07:36.294 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:36.294 ================================================================================= 00:07:36.294 1.00000% : 6175.508us 00:07:36.294 10.00000% : 6402.363us 00:07:36.294 25.00000% : 6604.012us 00:07:36.294 50.00000% : 6956.898us 00:07:36.294 75.00000% : 10536.172us 00:07:36.294 90.00000% : 12351.015us 00:07:36.294 95.00000% : 13208.025us 00:07:36.294 98.00000% : 14014.622us 00:07:36.294 99.00000% : 16434.412us 00:07:36.294 99.50000% : 24197.908us 00:07:36.294 99.90000% : 31053.982us 00:07:36.294 99.99000% : 31255.631us 00:07:36.294 99.99900% : 31255.631us 00:07:36.294 99.99990% : 31255.631us 00:07:36.294 99.99999% : 31255.631us 00:07:36.294 00:07:36.294 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:36.294 ================================================================================= 00:07:36.294 1.00000% : 6150.302us 00:07:36.295 10.00000% : 6402.363us 00:07:36.295 25.00000% : 6604.012us 00:07:36.295 50.00000% : 6956.898us 00:07:36.295 75.00000% : 10485.760us 00:07:36.295 90.00000% : 12401.428us 00:07:36.295 95.00000% : 13208.025us 00:07:36.295 98.00000% : 14115.446us 00:07:36.295 99.00000% : 15728.640us 00:07:36.295 99.50000% : 24601.206us 00:07:36.295 99.90000% : 30852.332us 00:07:36.295 99.99000% : 31053.982us 00:07:36.295 99.99900% : 31053.982us 00:07:36.295 99.99990% : 31053.982us 00:07:36.295 99.99999% : 31053.982us 00:07:36.295 00:07:36.295 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:36.295 ================================================================================= 00:07:36.295 1.00000% : 6099.889us 00:07:36.295 10.00000% : 6402.363us 00:07:36.295 25.00000% : 6604.012us 00:07:36.295 50.00000% : 6956.898us 00:07:36.295 75.00000% : 10536.172us 00:07:36.295 90.00000% : 12250.191us 00:07:36.295 95.00000% : 13208.025us 00:07:36.295 98.00000% : 14216.271us 00:07:36.295 
99.00000% : 15728.640us 00:07:36.295 99.50000% : 24298.732us 00:07:36.295 99.90000% : 30247.385us 00:07:36.295 99.99000% : 30449.034us 00:07:36.295 99.99900% : 30449.034us 00:07:36.295 99.99990% : 30449.034us 00:07:36.295 99.99999% : 30449.034us 00:07:36.295 00:07:36.295 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:36.295 ================================================================================= 00:07:36.295 1.00000% : 6150.302us 00:07:36.295 10.00000% : 6402.363us 00:07:36.295 25.00000% : 6604.012us 00:07:36.295 50.00000% : 6956.898us 00:07:36.295 75.00000% : 10485.760us 00:07:36.295 90.00000% : 12250.191us 00:07:36.295 95.00000% : 13208.025us 00:07:36.295 98.00000% : 14317.095us 00:07:36.295 99.00000% : 15829.465us 00:07:36.295 99.50000% : 24298.732us 00:07:36.295 99.90000% : 29642.437us 00:07:36.295 99.99000% : 29844.086us 00:07:36.295 99.99900% : 30045.735us 00:07:36.295 99.99990% : 30045.735us 00:07:36.295 99.99999% : 30045.735us 00:07:36.295 00:07:36.295 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:36.295 ================================================================================= 00:07:36.295 1.00000% : 6150.302us 00:07:36.295 10.00000% : 6402.363us 00:07:36.295 25.00000% : 6604.012us 00:07:36.295 50.00000% : 6956.898us 00:07:36.295 75.00000% : 10536.172us 00:07:36.295 90.00000% : 12250.191us 00:07:36.295 95.00000% : 13208.025us 00:07:36.295 98.00000% : 14115.446us 00:07:36.295 99.00000% : 16131.938us 00:07:36.295 99.50000% : 18249.255us 00:07:36.295 99.90000% : 24802.855us 00:07:36.295 99.99000% : 25004.505us 00:07:36.295 99.99900% : 25004.505us 00:07:36.295 99.99990% : 25004.505us 00:07:36.295 99.99999% : 25004.505us 00:07:36.295 00:07:36.295 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:36.295 ============================================================================== 00:07:36.295 Range in us Cumulative IO count 00:07:36.295 4688.345 - 4713.551: 0.0066% ( 1) 00:07:36.295 4713.551 - 4738.757: 0.0263% ( 3) 00:07:36.295 4738.757 - 4763.963: 0.0394% ( 2) 00:07:36.295 4763.963 - 4789.169: 0.0525% ( 2) 00:07:36.295 4789.169 - 4814.375: 0.0657% ( 2) 00:07:36.295 4814.375 - 4839.582: 0.0853% ( 3) 00:07:36.295 4839.582 - 4864.788: 0.0985% ( 2) 00:07:36.295 4864.788 - 4889.994: 0.1116% ( 2) 00:07:36.295 4889.994 - 4915.200: 0.1313% ( 3) 00:07:36.295 4915.200 - 4940.406: 0.1379% ( 1) 00:07:36.295 4940.406 - 4965.612: 0.1576% ( 3) 00:07:36.295 4965.612 - 4990.818: 0.1773% ( 3) 00:07:36.295 4990.818 - 5016.025: 0.1838% ( 1) 00:07:36.295 5016.025 - 5041.231: 0.2101% ( 4) 00:07:36.295 5041.231 - 5066.437: 0.2166% ( 1) 00:07:36.295 5066.437 - 5091.643: 0.2298% ( 2) 00:07:36.295 5091.643 - 5116.849: 0.2429% ( 2) 00:07:36.295 5116.849 - 5142.055: 0.2560% ( 2) 00:07:36.295 5142.055 - 5167.262: 0.2757% ( 3) 00:07:36.295 5167.262 - 5192.468: 0.2954% ( 3) 00:07:36.295 5192.468 - 5217.674: 0.3020% ( 1) 00:07:36.295 5217.674 - 5242.880: 0.3217% ( 3) 00:07:36.295 5242.880 - 5268.086: 0.3348% ( 2) 00:07:36.295 5268.086 - 5293.292: 0.3480% ( 2) 00:07:36.295 5293.292 - 5318.498: 0.3676% ( 3) 00:07:36.295 5318.498 - 5343.705: 0.3808% ( 2) 00:07:36.295 5343.705 - 5368.911: 0.3939% ( 2) 00:07:36.295 5368.911 - 5394.117: 0.4136% ( 3) 00:07:36.295 5394.117 - 5419.323: 0.4202% ( 1) 00:07:36.295 5973.858 - 5999.065: 0.4727% ( 8) 00:07:36.295 5999.065 - 6024.271: 0.5318% ( 9) 00:07:36.295 6024.271 - 6049.477: 0.6303% ( 15) 00:07:36.295 6049.477 - 6074.683: 0.7550% ( 19) 00:07:36.295 6074.683 - 6099.889: 0.8666% ( 17) 
00:07:36.295 6099.889 - 6125.095: 1.2342% ( 56) 00:07:36.295 6125.095 - 6150.302: 1.6938% ( 70) 00:07:36.295 6150.302 - 6175.508: 2.0811% ( 59) 00:07:36.295 6175.508 - 6200.714: 2.8755% ( 121) 00:07:36.295 6200.714 - 6225.920: 3.9653% ( 166) 00:07:36.295 6225.920 - 6251.126: 5.1996% ( 188) 00:07:36.295 6251.126 - 6276.332: 6.4995% ( 198) 00:07:36.295 6276.332 - 6301.538: 7.9569% ( 222) 00:07:36.295 6301.538 - 6326.745: 9.2371% ( 195) 00:07:36.295 6326.745 - 6351.951: 10.8587% ( 247) 00:07:36.295 6351.951 - 6377.157: 12.3556% ( 228) 00:07:36.295 6377.157 - 6402.363: 13.8918% ( 234) 00:07:36.295 6402.363 - 6427.569: 15.5462% ( 252) 00:07:36.295 6427.569 - 6452.775: 17.0365% ( 227) 00:07:36.295 6452.775 - 6503.188: 20.0827% ( 464) 00:07:36.295 6503.188 - 6553.600: 23.3325% ( 495) 00:07:36.295 6553.600 - 6604.012: 26.5822% ( 495) 00:07:36.295 6604.012 - 6654.425: 29.7991% ( 490) 00:07:36.295 6654.425 - 6704.837: 33.2064% ( 519) 00:07:36.295 6704.837 - 6755.249: 36.5284% ( 506) 00:07:36.295 6755.249 - 6805.662: 39.8306% ( 503) 00:07:36.295 6805.662 - 6856.074: 43.2117% ( 515) 00:07:36.295 6856.074 - 6906.486: 46.4351% ( 491) 00:07:36.295 6906.486 - 6956.898: 49.6980% ( 497) 00:07:36.295 6956.898 - 7007.311: 52.7967% ( 472) 00:07:36.295 7007.311 - 7057.723: 55.4884% ( 410) 00:07:36.295 7057.723 - 7108.135: 58.0160% ( 385) 00:07:36.295 7108.135 - 7158.548: 60.1103% ( 319) 00:07:36.295 7158.548 - 7208.960: 61.6859% ( 240) 00:07:36.295 7208.960 - 7259.372: 62.7692% ( 165) 00:07:36.295 7259.372 - 7309.785: 63.4848% ( 109) 00:07:36.295 7309.785 - 7360.197: 64.0100% ( 80) 00:07:36.295 7360.197 - 7410.609: 64.4236% ( 63) 00:07:36.295 7410.609 - 7461.022: 64.8700% ( 68) 00:07:36.295 7461.022 - 7511.434: 65.2705% ( 61) 00:07:36.295 7511.434 - 7561.846: 65.5462% ( 42) 00:07:36.295 7561.846 - 7612.258: 65.8285% ( 43) 00:07:36.295 7612.258 - 7662.671: 66.0714% ( 37) 00:07:36.295 7662.671 - 7713.083: 66.2487% ( 27) 00:07:36.295 7713.083 - 7763.495: 66.4259% ( 27) 00:07:36.295 7763.495 - 7813.908: 66.5638% ( 21) 00:07:36.295 7813.908 - 7864.320: 66.7148% ( 23) 00:07:36.295 7864.320 - 7914.732: 66.9052% ( 29) 00:07:36.295 7914.732 - 7965.145: 67.0628% ( 24) 00:07:36.295 7965.145 - 8015.557: 67.2794% ( 33) 00:07:36.295 8015.557 - 8065.969: 67.4304% ( 23) 00:07:36.295 8065.969 - 8116.382: 67.5683% ( 21) 00:07:36.295 8116.382 - 8166.794: 67.6996% ( 20) 00:07:36.295 8166.794 - 8217.206: 67.7652% ( 10) 00:07:36.295 8217.206 - 8267.618: 67.8965% ( 20) 00:07:36.295 8267.618 - 8318.031: 68.0278% ( 20) 00:07:36.295 8318.031 - 8368.443: 68.1788% ( 23) 00:07:36.295 8368.443 - 8418.855: 68.2904% ( 17) 00:07:36.295 8418.855 - 8469.268: 68.4283% ( 21) 00:07:36.295 8469.268 - 8519.680: 68.5530% ( 19) 00:07:36.295 8519.680 - 8570.092: 68.6778% ( 19) 00:07:36.295 8570.092 - 8620.505: 68.8157% ( 21) 00:07:36.295 8620.505 - 8670.917: 68.9535% ( 21) 00:07:36.295 8670.917 - 8721.329: 69.1308% ( 27) 00:07:36.295 8721.329 - 8771.742: 69.2752% ( 22) 00:07:36.295 8771.742 - 8822.154: 69.4196% ( 22) 00:07:36.295 8822.154 - 8872.566: 69.5509% ( 20) 00:07:36.295 8872.566 - 8922.978: 69.7085% ( 24) 00:07:36.295 8922.978 - 8973.391: 69.8398% ( 20) 00:07:36.295 8973.391 - 9023.803: 69.9908% ( 23) 00:07:36.295 9023.803 - 9074.215: 70.1287% ( 21) 00:07:36.295 9074.215 - 9124.628: 70.2534% ( 19) 00:07:36.295 9124.628 - 9175.040: 70.3782% ( 19) 00:07:36.295 9175.040 - 9225.452: 70.5423% ( 25) 00:07:36.295 9225.452 - 9275.865: 70.6736% ( 20) 00:07:36.295 9275.865 - 9326.277: 70.7983% ( 19) 00:07:36.295 9326.277 - 9376.689: 70.9428% ( 22) 
00:07:36.295 9376.689 - 9427.102: 71.0544% ( 17) 00:07:36.295 9427.102 - 9477.514: 71.1725% ( 18) 00:07:36.295 9477.514 - 9527.926: 71.3038% ( 20) 00:07:36.295 9527.926 - 9578.338: 71.4483% ( 22) 00:07:36.295 9578.338 - 9628.751: 71.5402% ( 14) 00:07:36.295 9628.751 - 9679.163: 71.6518% ( 17) 00:07:36.295 9679.163 - 9729.575: 71.8028% ( 23) 00:07:36.295 9729.575 - 9779.988: 72.0326% ( 35) 00:07:36.295 9779.988 - 9830.400: 72.2098% ( 27) 00:07:36.295 9830.400 - 9880.812: 72.3608% ( 23) 00:07:36.295 9880.812 - 9931.225: 72.5053% ( 22) 00:07:36.295 9931.225 - 9981.637: 72.6891% ( 28) 00:07:36.295 9981.637 - 10032.049: 72.9057% ( 33) 00:07:36.295 10032.049 - 10082.462: 73.0502% ( 22) 00:07:36.295 10082.462 - 10132.874: 73.1749% ( 19) 00:07:36.295 10132.874 - 10183.286: 73.3390% ( 25) 00:07:36.295 10183.286 - 10233.698: 73.5557% ( 33) 00:07:36.295 10233.698 - 10284.111: 73.7658% ( 32) 00:07:36.295 10284.111 - 10334.523: 73.9955% ( 35) 00:07:36.295 10334.523 - 10384.935: 74.2581% ( 40) 00:07:36.295 10384.935 - 10435.348: 74.5536% ( 45) 00:07:36.295 10435.348 - 10485.760: 74.8818% ( 50) 00:07:36.295 10485.760 - 10536.172: 75.2495% ( 56) 00:07:36.295 10536.172 - 10586.585: 75.6696% ( 64) 00:07:36.295 10586.585 - 10636.997: 76.0701% ( 61) 00:07:36.295 10636.997 - 10687.409: 76.4837% ( 63) 00:07:36.296 10687.409 - 10737.822: 76.7791% ( 45) 00:07:36.296 10737.822 - 10788.234: 77.0943% ( 48) 00:07:36.296 10788.234 - 10838.646: 77.4094% ( 48) 00:07:36.296 10838.646 - 10889.058: 77.6983% ( 44) 00:07:36.296 10889.058 - 10939.471: 77.9477% ( 38) 00:07:36.296 10939.471 - 10989.883: 78.2694% ( 49) 00:07:36.296 10989.883 - 11040.295: 78.6305% ( 55) 00:07:36.296 11040.295 - 11090.708: 79.0572% ( 65) 00:07:36.296 11090.708 - 11141.120: 79.6678% ( 93) 00:07:36.296 11141.120 - 11191.532: 80.2061% ( 82) 00:07:36.296 11191.532 - 11241.945: 80.6854% ( 73) 00:07:36.296 11241.945 - 11292.357: 81.2434% ( 85) 00:07:36.296 11292.357 - 11342.769: 81.7358% ( 75) 00:07:36.296 11342.769 - 11393.182: 82.2413% ( 77) 00:07:36.296 11393.182 - 11443.594: 82.7928% ( 84) 00:07:36.296 11443.594 - 11494.006: 83.2721% ( 73) 00:07:36.296 11494.006 - 11544.418: 83.7382% ( 71) 00:07:36.296 11544.418 - 11594.831: 84.2371% ( 76) 00:07:36.296 11594.831 - 11645.243: 84.6113% ( 57) 00:07:36.296 11645.243 - 11695.655: 85.1431% ( 81) 00:07:36.296 11695.655 - 11746.068: 85.4845% ( 52) 00:07:36.296 11746.068 - 11796.480: 85.9178% ( 66) 00:07:36.296 11796.480 - 11846.892: 86.3117% ( 60) 00:07:36.296 11846.892 - 11897.305: 86.8172% ( 77) 00:07:36.296 11897.305 - 11947.717: 87.2899% ( 72) 00:07:36.296 11947.717 - 11998.129: 87.8283% ( 82) 00:07:36.296 11998.129 - 12048.542: 88.2616% ( 66) 00:07:36.296 12048.542 - 12098.954: 88.8065% ( 83) 00:07:36.296 12098.954 - 12149.366: 89.2004% ( 60) 00:07:36.296 12149.366 - 12199.778: 89.6074% ( 62) 00:07:36.296 12199.778 - 12250.191: 89.9094% ( 46) 00:07:36.296 12250.191 - 12300.603: 90.2967% ( 59) 00:07:36.296 12300.603 - 12351.015: 90.6578% ( 55) 00:07:36.296 12351.015 - 12401.428: 90.9992% ( 52) 00:07:36.296 12401.428 - 12451.840: 91.3209% ( 49) 00:07:36.296 12451.840 - 12502.252: 91.6623% ( 52) 00:07:36.296 12502.252 - 12552.665: 91.9446% ( 43) 00:07:36.296 12552.665 - 12603.077: 92.2728% ( 50) 00:07:36.296 12603.077 - 12653.489: 92.6142% ( 52) 00:07:36.296 12653.489 - 12703.902: 92.8834% ( 41) 00:07:36.296 12703.902 - 12754.314: 93.1591% ( 42) 00:07:36.296 12754.314 - 12804.726: 93.4349% ( 42) 00:07:36.296 12804.726 - 12855.138: 93.7434% ( 47) 00:07:36.296 12855.138 - 12905.551: 94.0651% ( 49) 
00:07:36.296 12905.551 - 13006.375: 94.4919% ( 65) 00:07:36.296 13006.375 - 13107.200: 94.8858% ( 60) 00:07:36.296 13107.200 - 13208.025: 95.2140% ( 50) 00:07:36.296 13208.025 - 13308.849: 95.4701% ( 39) 00:07:36.296 13308.849 - 13409.674: 95.7786% ( 47) 00:07:36.296 13409.674 - 13510.498: 96.0741% ( 45) 00:07:36.296 13510.498 - 13611.323: 96.4614% ( 59) 00:07:36.296 13611.323 - 13712.148: 96.7568% ( 45) 00:07:36.296 13712.148 - 13812.972: 97.1048% ( 53) 00:07:36.296 13812.972 - 13913.797: 97.4133% ( 47) 00:07:36.296 13913.797 - 14014.622: 97.6825% ( 41) 00:07:36.296 14014.622 - 14115.446: 97.9386% ( 39) 00:07:36.296 14115.446 - 14216.271: 98.1158% ( 27) 00:07:36.296 14216.271 - 14317.095: 98.3259% ( 32) 00:07:36.296 14317.095 - 14417.920: 98.4900% ( 25) 00:07:36.296 14417.920 - 14518.745: 98.6148% ( 19) 00:07:36.296 14518.745 - 14619.569: 98.7001% ( 13) 00:07:36.296 14619.569 - 14720.394: 98.7264% ( 4) 00:07:36.296 14720.394 - 14821.218: 98.7395% ( 2) 00:07:36.296 15325.342 - 15426.166: 98.7658% ( 4) 00:07:36.296 15426.166 - 15526.991: 98.8314% ( 10) 00:07:36.296 15526.991 - 15627.815: 98.8511% ( 3) 00:07:36.296 15627.815 - 15728.640: 98.8774% ( 4) 00:07:36.296 15728.640 - 15829.465: 98.8905% ( 2) 00:07:36.296 15829.465 - 15930.289: 98.9168% ( 4) 00:07:36.296 15930.289 - 16031.114: 98.9364% ( 3) 00:07:36.296 16031.114 - 16131.938: 98.9627% ( 4) 00:07:36.296 16131.938 - 16232.763: 98.9824% ( 3) 00:07:36.296 16232.763 - 16333.588: 99.0087% ( 4) 00:07:36.296 16333.588 - 16434.412: 99.0349% ( 4) 00:07:36.296 16434.412 - 16535.237: 99.0546% ( 3) 00:07:36.296 16535.237 - 16636.062: 99.0809% ( 4) 00:07:36.296 16636.062 - 16736.886: 99.1137% ( 5) 00:07:36.296 16736.886 - 16837.711: 99.1268% ( 2) 00:07:36.296 16837.711 - 16938.535: 99.1597% ( 5) 00:07:36.296 23492.135 - 23592.960: 99.1662% ( 1) 00:07:36.296 23592.960 - 23693.785: 99.1991% ( 5) 00:07:36.296 23693.785 - 23794.609: 99.2384% ( 6) 00:07:36.296 23794.609 - 23895.434: 99.2778% ( 6) 00:07:36.296 23895.434 - 23996.258: 99.3172% ( 6) 00:07:36.296 23996.258 - 24097.083: 99.3566% ( 6) 00:07:36.296 24097.083 - 24197.908: 99.4026% ( 7) 00:07:36.296 24197.908 - 24298.732: 99.4288% ( 4) 00:07:36.296 24298.732 - 24399.557: 99.4814% ( 8) 00:07:36.296 24399.557 - 24500.382: 99.5207% ( 6) 00:07:36.296 24500.382 - 24601.206: 99.5536% ( 5) 00:07:36.296 24601.206 - 24702.031: 99.5798% ( 4) 00:07:36.296 30650.683 - 30852.332: 99.6061% ( 4) 00:07:36.296 30852.332 - 31053.982: 99.6980% ( 14) 00:07:36.296 31053.982 - 31255.631: 99.7571% ( 9) 00:07:36.296 31255.631 - 31457.280: 99.8359% ( 12) 00:07:36.296 31457.280 - 31658.929: 99.9147% ( 12) 00:07:36.296 31658.929 - 31860.578: 99.9934% ( 12) 00:07:36.296 31860.578 - 32062.228: 100.0000% ( 1) 00:07:36.296 00:07:36.296 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:36.296 ============================================================================== 00:07:36.296 Range in us Cumulative IO count 00:07:36.296 4587.520 - 4612.726: 0.0197% ( 3) 00:07:36.296 4612.726 - 4637.932: 0.0525% ( 5) 00:07:36.296 4637.932 - 4663.138: 0.0657% ( 2) 00:07:36.296 4663.138 - 4688.345: 0.0788% ( 2) 00:07:36.296 4688.345 - 4713.551: 0.0985% ( 3) 00:07:36.296 4713.551 - 4738.757: 0.1116% ( 2) 00:07:36.296 4738.757 - 4763.963: 0.1313% ( 3) 00:07:36.296 4763.963 - 4789.169: 0.1510% ( 3) 00:07:36.296 4789.169 - 4814.375: 0.1641% ( 2) 00:07:36.296 4814.375 - 4839.582: 0.1773% ( 2) 00:07:36.296 4839.582 - 4864.788: 0.1904% ( 2) 00:07:36.296 4864.788 - 4889.994: 0.2101% ( 3) 00:07:36.296 4889.994 - 4915.200: 0.2232% 
( 2) 00:07:36.296 4915.200 - 4940.406: 0.2363% ( 2) 00:07:36.296 4940.406 - 4965.612: 0.2560% ( 3) 00:07:36.296 4965.612 - 4990.818: 0.2757% ( 3) 00:07:36.296 4990.818 - 5016.025: 0.2954% ( 3) 00:07:36.296 5016.025 - 5041.231: 0.3151% ( 3) 00:07:36.296 5041.231 - 5066.437: 0.3283% ( 2) 00:07:36.296 5066.437 - 5091.643: 0.3480% ( 3) 00:07:36.296 5091.643 - 5116.849: 0.3676% ( 3) 00:07:36.296 5116.849 - 5142.055: 0.3808% ( 2) 00:07:36.296 5142.055 - 5167.262: 0.4005% ( 3) 00:07:36.296 5167.262 - 5192.468: 0.4202% ( 3) 00:07:36.296 6024.271 - 6049.477: 0.4727% ( 8) 00:07:36.296 6049.477 - 6074.683: 0.5515% ( 12) 00:07:36.296 6074.683 - 6099.889: 0.6171% ( 10) 00:07:36.296 6099.889 - 6125.095: 0.7090% ( 14) 00:07:36.296 6125.095 - 6150.302: 0.9191% ( 32) 00:07:36.296 6150.302 - 6175.508: 1.2080% ( 44) 00:07:36.296 6175.508 - 6200.714: 1.6675% ( 70) 00:07:36.296 6200.714 - 6225.920: 2.1402% ( 72) 00:07:36.296 6225.920 - 6251.126: 2.8624% ( 110) 00:07:36.296 6251.126 - 6276.332: 3.8143% ( 145) 00:07:36.296 6276.332 - 6301.538: 5.1339% ( 201) 00:07:36.296 6301.538 - 6326.745: 6.5783% ( 220) 00:07:36.296 6326.745 - 6351.951: 8.1670% ( 242) 00:07:36.296 6351.951 - 6377.157: 9.5982% ( 218) 00:07:36.296 6377.157 - 6402.363: 11.0163% ( 216) 00:07:36.296 6402.363 - 6427.569: 12.9070% ( 288) 00:07:36.296 6427.569 - 6452.775: 14.6993% ( 273) 00:07:36.296 6452.775 - 6503.188: 18.4414% ( 570) 00:07:36.296 6503.188 - 6553.600: 22.1179% ( 560) 00:07:36.296 6553.600 - 6604.012: 25.7353% ( 551) 00:07:36.296 6604.012 - 6654.425: 29.7925% ( 618) 00:07:36.296 6654.425 - 6704.837: 33.7316% ( 600) 00:07:36.296 6704.837 - 6755.249: 37.7560% ( 613) 00:07:36.296 6755.249 - 6805.662: 41.6557% ( 594) 00:07:36.296 6805.662 - 6856.074: 45.4701% ( 581) 00:07:36.296 6856.074 - 6906.486: 49.0152% ( 540) 00:07:36.296 6906.486 - 6956.898: 52.5013% ( 531) 00:07:36.296 6956.898 - 7007.311: 55.4688% ( 452) 00:07:36.296 7007.311 - 7057.723: 58.1145% ( 403) 00:07:36.296 7057.723 - 7108.135: 60.3795% ( 345) 00:07:36.296 7108.135 - 7158.548: 61.9223% ( 235) 00:07:36.296 7158.548 - 7208.960: 62.8873% ( 147) 00:07:36.296 7208.960 - 7259.372: 63.5373% ( 99) 00:07:36.296 7259.372 - 7309.785: 63.9837% ( 68) 00:07:36.296 7309.785 - 7360.197: 64.3514% ( 56) 00:07:36.296 7360.197 - 7410.609: 64.6796% ( 50) 00:07:36.296 7410.609 - 7461.022: 64.9947% ( 48) 00:07:36.296 7461.022 - 7511.434: 65.2836% ( 44) 00:07:36.296 7511.434 - 7561.846: 65.5134% ( 35) 00:07:36.296 7561.846 - 7612.258: 65.7432% ( 35) 00:07:36.296 7612.258 - 7662.671: 65.9664% ( 34) 00:07:36.296 7662.671 - 7713.083: 66.1436% ( 27) 00:07:36.296 7713.083 - 7763.495: 66.3078% ( 25) 00:07:36.296 7763.495 - 7813.908: 66.4850% ( 27) 00:07:36.296 7813.908 - 7864.320: 66.6886% ( 31) 00:07:36.296 7864.320 - 7914.732: 66.8658% ( 27) 00:07:36.296 7914.732 - 7965.145: 67.0693% ( 31) 00:07:36.296 7965.145 - 8015.557: 67.2663% ( 30) 00:07:36.296 8015.557 - 8065.969: 67.4501% ( 28) 00:07:36.296 8065.969 - 8116.382: 67.6011% ( 23) 00:07:36.296 8116.382 - 8166.794: 67.7324% ( 20) 00:07:36.296 8166.794 - 8217.206: 67.8703% ( 21) 00:07:36.296 8217.206 - 8267.618: 68.0213% ( 23) 00:07:36.296 8267.618 - 8318.031: 68.1854% ( 25) 00:07:36.296 8318.031 - 8368.443: 68.2839% ( 15) 00:07:36.296 8368.443 - 8418.855: 68.3495% ( 10) 00:07:36.296 8418.855 - 8469.268: 68.4217% ( 11) 00:07:36.296 8469.268 - 8519.680: 68.4940% ( 11) 00:07:36.296 8519.680 - 8570.092: 68.5793% ( 13) 00:07:36.296 8570.092 - 8620.505: 68.7040% ( 19) 00:07:36.296 8620.505 - 8670.917: 68.8682% ( 25) 00:07:36.296 8670.917 - 8721.329: 
68.9863% ( 18) 00:07:36.296 8721.329 - 8771.742: 69.1373% ( 23) 00:07:36.296 8771.742 - 8822.154: 69.2686% ( 20) 00:07:36.296 8822.154 - 8872.566: 69.3868% ( 18) 00:07:36.296 8872.566 - 8922.978: 69.5444% ( 24) 00:07:36.296 8922.978 - 8973.391: 69.6888% ( 22) 00:07:36.297 8973.391 - 9023.803: 69.8464% ( 24) 00:07:36.297 9023.803 - 9074.215: 69.9908% ( 22) 00:07:36.297 9074.215 - 9124.628: 70.1352% ( 22) 00:07:36.297 9124.628 - 9175.040: 70.2731% ( 21) 00:07:36.297 9175.040 - 9225.452: 70.4307% ( 24) 00:07:36.297 9225.452 - 9275.865: 70.6342% ( 31) 00:07:36.297 9275.865 - 9326.277: 70.8377% ( 31) 00:07:36.297 9326.277 - 9376.689: 71.0018% ( 25) 00:07:36.297 9376.689 - 9427.102: 71.1725% ( 26) 00:07:36.297 9427.102 - 9477.514: 71.3432% ( 26) 00:07:36.297 9477.514 - 9527.926: 71.4942% ( 23) 00:07:36.297 9527.926 - 9578.338: 71.6584% ( 25) 00:07:36.297 9578.338 - 9628.751: 71.8028% ( 22) 00:07:36.297 9628.751 - 9679.163: 71.9669% ( 25) 00:07:36.297 9679.163 - 9729.575: 72.0851% ( 18) 00:07:36.297 9729.575 - 9779.988: 72.2098% ( 19) 00:07:36.297 9779.988 - 9830.400: 72.3346% ( 19) 00:07:36.297 9830.400 - 9880.812: 72.4593% ( 19) 00:07:36.297 9880.812 - 9931.225: 72.5840% ( 19) 00:07:36.297 9931.225 - 9981.637: 72.7482% ( 25) 00:07:36.297 9981.637 - 10032.049: 72.8860% ( 21) 00:07:36.297 10032.049 - 10082.462: 73.1092% ( 34) 00:07:36.297 10082.462 - 10132.874: 73.2996% ( 29) 00:07:36.297 10132.874 - 10183.286: 73.4966% ( 30) 00:07:36.297 10183.286 - 10233.698: 73.6738% ( 27) 00:07:36.297 10233.698 - 10284.111: 73.8971% ( 34) 00:07:36.297 10284.111 - 10334.523: 74.0940% ( 30) 00:07:36.297 10334.523 - 10384.935: 74.2647% ( 26) 00:07:36.297 10384.935 - 10435.348: 74.5339% ( 41) 00:07:36.297 10435.348 - 10485.760: 74.8162% ( 43) 00:07:36.297 10485.760 - 10536.172: 75.1510% ( 51) 00:07:36.297 10536.172 - 10586.585: 75.4727% ( 49) 00:07:36.297 10586.585 - 10636.997: 75.8206% ( 53) 00:07:36.297 10636.997 - 10687.409: 76.2211% ( 61) 00:07:36.297 10687.409 - 10737.822: 76.5953% ( 57) 00:07:36.297 10737.822 - 10788.234: 76.9695% ( 57) 00:07:36.297 10788.234 - 10838.646: 77.3503% ( 58) 00:07:36.297 10838.646 - 10889.058: 77.7639% ( 63) 00:07:36.297 10889.058 - 10939.471: 78.1578% ( 60) 00:07:36.297 10939.471 - 10989.883: 78.5780% ( 64) 00:07:36.297 10989.883 - 11040.295: 79.0376% ( 70) 00:07:36.297 11040.295 - 11090.708: 79.5365% ( 76) 00:07:36.297 11090.708 - 11141.120: 80.0814% ( 83) 00:07:36.297 11141.120 - 11191.532: 80.5147% ( 66) 00:07:36.297 11191.532 - 11241.945: 81.0137% ( 76) 00:07:36.297 11241.945 - 11292.357: 81.5717% ( 85) 00:07:36.297 11292.357 - 11342.769: 82.0312% ( 70) 00:07:36.297 11342.769 - 11393.182: 82.4777% ( 68) 00:07:36.297 11393.182 - 11443.594: 82.9044% ( 65) 00:07:36.297 11443.594 - 11494.006: 83.3114% ( 62) 00:07:36.297 11494.006 - 11544.418: 83.7054% ( 60) 00:07:36.297 11544.418 - 11594.831: 84.0993% ( 60) 00:07:36.297 11594.831 - 11645.243: 84.5523% ( 69) 00:07:36.297 11645.243 - 11695.655: 84.9659% ( 63) 00:07:36.297 11695.655 - 11746.068: 85.3860% ( 64) 00:07:36.297 11746.068 - 11796.480: 85.8259% ( 67) 00:07:36.297 11796.480 - 11846.892: 86.2067% ( 58) 00:07:36.297 11846.892 - 11897.305: 86.6334% ( 65) 00:07:36.297 11897.305 - 11947.717: 87.0339% ( 61) 00:07:36.297 11947.717 - 11998.129: 87.4015% ( 56) 00:07:36.297 11998.129 - 12048.542: 87.7889% ( 59) 00:07:36.297 12048.542 - 12098.954: 88.1828% ( 60) 00:07:36.297 12098.954 - 12149.366: 88.7211% ( 82) 00:07:36.297 12149.366 - 12199.778: 89.1478% ( 65) 00:07:36.297 12199.778 - 12250.191: 89.4761% ( 50) 00:07:36.297 
12250.191 - 12300.603: 89.7781% ( 46) 00:07:36.297 12300.603 - 12351.015: 90.0801% ( 46) 00:07:36.297 12351.015 - 12401.428: 90.4477% ( 56) 00:07:36.297 12401.428 - 12451.840: 90.7497% ( 46) 00:07:36.297 12451.840 - 12502.252: 91.0386% ( 44) 00:07:36.297 12502.252 - 12552.665: 91.3340% ( 45) 00:07:36.297 12552.665 - 12603.077: 91.6032% ( 41) 00:07:36.297 12603.077 - 12653.489: 91.8986% ( 45) 00:07:36.297 12653.489 - 12703.902: 92.2335% ( 51) 00:07:36.297 12703.902 - 12754.314: 92.6274% ( 60) 00:07:36.297 12754.314 - 12804.726: 92.9556% ( 50) 00:07:36.297 12804.726 - 12855.138: 93.2379% ( 43) 00:07:36.297 12855.138 - 12905.551: 93.4480% ( 32) 00:07:36.297 12905.551 - 13006.375: 94.0323% ( 89) 00:07:36.297 13006.375 - 13107.200: 94.5706% ( 82) 00:07:36.297 13107.200 - 13208.025: 95.1024% ( 81) 00:07:36.297 13208.025 - 13308.849: 95.6079% ( 77) 00:07:36.297 13308.849 - 13409.674: 96.1463% ( 82) 00:07:36.297 13409.674 - 13510.498: 96.5927% ( 68) 00:07:36.297 13510.498 - 13611.323: 97.0063% ( 63) 00:07:36.297 13611.323 - 13712.148: 97.4002% ( 60) 00:07:36.297 13712.148 - 13812.972: 97.6694% ( 41) 00:07:36.297 13812.972 - 13913.797: 97.8926% ( 34) 00:07:36.297 13913.797 - 14014.622: 98.0633% ( 26) 00:07:36.297 14014.622 - 14115.446: 98.2274% ( 25) 00:07:36.297 14115.446 - 14216.271: 98.3784% ( 23) 00:07:36.297 14216.271 - 14317.095: 98.5097% ( 20) 00:07:36.297 14317.095 - 14417.920: 98.6213% ( 17) 00:07:36.297 14417.920 - 14518.745: 98.6804% ( 9) 00:07:36.297 14518.745 - 14619.569: 98.7132% ( 5) 00:07:36.297 14619.569 - 14720.394: 98.7395% ( 4) 00:07:36.297 15526.991 - 15627.815: 98.7723% ( 5) 00:07:36.297 15627.815 - 15728.640: 98.7986% ( 4) 00:07:36.297 15728.640 - 15829.465: 98.8314% ( 5) 00:07:36.297 15829.465 - 15930.289: 98.8577% ( 4) 00:07:36.297 15930.289 - 16031.114: 98.8905% ( 5) 00:07:36.297 16031.114 - 16131.938: 98.9168% ( 4) 00:07:36.297 16131.938 - 16232.763: 98.9496% ( 5) 00:07:36.297 16232.763 - 16333.588: 98.9824% ( 5) 00:07:36.297 16333.588 - 16434.412: 99.0152% ( 5) 00:07:36.297 16434.412 - 16535.237: 99.0481% ( 5) 00:07:36.297 16535.237 - 16636.062: 99.0743% ( 4) 00:07:36.297 16636.062 - 16736.886: 99.1071% ( 5) 00:07:36.297 16736.886 - 16837.711: 99.1400% ( 5) 00:07:36.297 16837.711 - 16938.535: 99.1597% ( 3) 00:07:36.297 23189.662 - 23290.486: 99.1662% ( 1) 00:07:36.297 23290.486 - 23391.311: 99.1925% ( 4) 00:07:36.297 23391.311 - 23492.135: 99.2319% ( 6) 00:07:36.297 23492.135 - 23592.960: 99.2713% ( 6) 00:07:36.297 23592.960 - 23693.785: 99.3041% ( 5) 00:07:36.297 23693.785 - 23794.609: 99.3501% ( 7) 00:07:36.297 23794.609 - 23895.434: 99.3894% ( 6) 00:07:36.297 23895.434 - 23996.258: 99.4354% ( 7) 00:07:36.297 23996.258 - 24097.083: 99.4814% ( 7) 00:07:36.297 24097.083 - 24197.908: 99.5207% ( 6) 00:07:36.297 24197.908 - 24298.732: 99.5667% ( 7) 00:07:36.297 24298.732 - 24399.557: 99.5798% ( 2) 00:07:36.297 30247.385 - 30449.034: 99.6717% ( 14) 00:07:36.297 30449.034 - 30650.683: 99.7505% ( 12) 00:07:36.297 30650.683 - 30852.332: 99.8424% ( 14) 00:07:36.297 30852.332 - 31053.982: 99.9212% ( 12) 00:07:36.297 31053.982 - 31255.631: 100.0000% ( 12) 00:07:36.297 00:07:36.297 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:36.297 ============================================================================== 00:07:36.297 Range in us Cumulative IO count 00:07:36.297 4007.778 - 4032.985: 0.0460% ( 7) 00:07:36.297 4032.985 - 4058.191: 0.0591% ( 2) 00:07:36.297 4058.191 - 4083.397: 0.0722% ( 2) 00:07:36.297 4083.397 - 4108.603: 0.0919% ( 3) 00:07:36.297 
4108.603 - 4133.809: 0.1050% ( 2) 00:07:36.297 4133.809 - 4159.015: 0.1247% ( 3) 00:07:36.297 4159.015 - 4184.222: 0.1379% ( 2) 00:07:36.297 4184.222 - 4209.428: 0.1510% ( 2) 00:07:36.297 4209.428 - 4234.634: 0.1707% ( 3) 00:07:36.297 4234.634 - 4259.840: 0.1838% ( 2) 00:07:36.297 4259.840 - 4285.046: 0.1970% ( 2) 00:07:36.297 4285.046 - 4310.252: 0.2166% ( 3) 00:07:36.297 4310.252 - 4335.458: 0.2363% ( 3) 00:07:36.297 4335.458 - 4360.665: 0.2495% ( 2) 00:07:36.297 4360.665 - 4385.871: 0.2692% ( 3) 00:07:36.297 4385.871 - 4411.077: 0.2889% ( 3) 00:07:36.297 4411.077 - 4436.283: 0.3086% ( 3) 00:07:36.297 4436.283 - 4461.489: 0.3217% ( 2) 00:07:36.297 4461.489 - 4486.695: 0.3414% ( 3) 00:07:36.297 4486.695 - 4511.902: 0.3611% ( 3) 00:07:36.297 4511.902 - 4537.108: 0.3742% ( 2) 00:07:36.297 4537.108 - 4562.314: 0.3939% ( 3) 00:07:36.297 4562.314 - 4587.520: 0.4136% ( 3) 00:07:36.297 4587.520 - 4612.726: 0.4202% ( 1) 00:07:36.297 5570.560 - 5595.766: 0.4333% ( 2) 00:07:36.297 5595.766 - 5620.972: 0.4464% ( 2) 00:07:36.297 5620.972 - 5646.178: 0.4661% ( 3) 00:07:36.297 5646.178 - 5671.385: 0.4793% ( 2) 00:07:36.297 5671.385 - 5696.591: 0.4924% ( 2) 00:07:36.297 5696.591 - 5721.797: 0.5186% ( 4) 00:07:36.297 5721.797 - 5747.003: 0.5318% ( 2) 00:07:36.297 5747.003 - 5772.209: 0.5449% ( 2) 00:07:36.297 5772.209 - 5797.415: 0.5646% ( 3) 00:07:36.297 5797.415 - 5822.622: 0.5777% ( 2) 00:07:36.297 5822.622 - 5847.828: 0.5909% ( 2) 00:07:36.297 5847.828 - 5873.034: 0.6106% ( 3) 00:07:36.297 5873.034 - 5898.240: 0.6237% ( 2) 00:07:36.297 5898.240 - 5923.446: 0.6434% ( 3) 00:07:36.297 5923.446 - 5948.652: 0.6565% ( 2) 00:07:36.297 5948.652 - 5973.858: 0.6893% ( 5) 00:07:36.297 5973.858 - 5999.065: 0.7222% ( 5) 00:07:36.297 5999.065 - 6024.271: 0.7812% ( 9) 00:07:36.297 6024.271 - 6049.477: 0.8272% ( 7) 00:07:36.297 6049.477 - 6074.683: 0.8929% ( 10) 00:07:36.297 6074.683 - 6099.889: 0.9388% ( 7) 00:07:36.297 6099.889 - 6125.095: 0.9979% ( 9) 00:07:36.297 6125.095 - 6150.302: 1.1949% ( 30) 00:07:36.297 6150.302 - 6175.508: 1.5034% ( 47) 00:07:36.297 6175.508 - 6200.714: 1.8514% ( 53) 00:07:36.297 6200.714 - 6225.920: 2.4357% ( 89) 00:07:36.297 6225.920 - 6251.126: 3.3416% ( 138) 00:07:36.297 6251.126 - 6276.332: 4.3067% ( 147) 00:07:36.297 6276.332 - 6301.538: 5.4228% ( 170) 00:07:36.297 6301.538 - 6326.745: 6.6373% ( 185) 00:07:36.297 6326.745 - 6351.951: 8.0685% ( 218) 00:07:36.297 6351.951 - 6377.157: 9.7886% ( 262) 00:07:36.297 6377.157 - 6402.363: 11.3971% ( 245) 00:07:36.297 6402.363 - 6427.569: 13.4454% ( 312) 00:07:36.297 6427.569 - 6452.775: 15.3361% ( 288) 00:07:36.297 6452.775 - 6503.188: 19.1964% ( 588) 00:07:36.297 6503.188 - 6553.600: 22.8466% ( 556) 00:07:36.297 6553.600 - 6604.012: 26.4640% ( 551) 00:07:36.298 6604.012 - 6654.425: 30.0551% ( 547) 00:07:36.298 6654.425 - 6704.837: 33.9154% ( 588) 00:07:36.298 6704.837 - 6755.249: 37.9661% ( 617) 00:07:36.298 6755.249 - 6805.662: 41.8527% ( 592) 00:07:36.298 6805.662 - 6856.074: 45.5423% ( 562) 00:07:36.298 6856.074 - 6906.486: 49.2910% ( 571) 00:07:36.298 6906.486 - 6956.898: 52.6392% ( 510) 00:07:36.298 6956.898 - 7007.311: 55.7511% ( 474) 00:07:36.298 7007.311 - 7057.723: 58.4165% ( 406) 00:07:36.298 7057.723 - 7108.135: 60.6027% ( 333) 00:07:36.298 7108.135 - 7158.548: 62.2899% ( 257) 00:07:36.298 7158.548 - 7208.960: 63.3403% ( 160) 00:07:36.298 7208.960 - 7259.372: 64.0953% ( 115) 00:07:36.298 7259.372 - 7309.785: 64.6008% ( 77) 00:07:36.298 7309.785 - 7360.197: 65.0210% ( 64) 00:07:36.298 7360.197 - 7410.609: 65.4018% ( 58) 
00:07:36.298 7410.609 - 7461.022: 65.7760% ( 57) 00:07:36.298 7461.022 - 7511.434: 66.0780% ( 46) 00:07:36.298 7511.434 - 7561.846: 66.2421% ( 25) 00:07:36.298 7561.846 - 7612.258: 66.3734% ( 20) 00:07:36.298 7612.258 - 7662.671: 66.5113% ( 21) 00:07:36.298 7662.671 - 7713.083: 66.6426% ( 20) 00:07:36.298 7713.083 - 7763.495: 66.7739% ( 20) 00:07:36.298 7763.495 - 7813.908: 66.8789% ( 16) 00:07:36.298 7813.908 - 7864.320: 66.9643% ( 13) 00:07:36.298 7864.320 - 7914.732: 67.0234% ( 9) 00:07:36.298 7914.732 - 7965.145: 67.1022% ( 12) 00:07:36.298 7965.145 - 8015.557: 67.1678% ( 10) 00:07:36.298 8015.557 - 8065.969: 67.2532% ( 13) 00:07:36.298 8065.969 - 8116.382: 67.3385% ( 13) 00:07:36.298 8116.382 - 8166.794: 67.4632% ( 19) 00:07:36.298 8166.794 - 8217.206: 67.6208% ( 24) 00:07:36.298 8217.206 - 8267.618: 67.7390% ( 18) 00:07:36.298 8267.618 - 8318.031: 67.8768% ( 21) 00:07:36.298 8318.031 - 8368.443: 68.0213% ( 22) 00:07:36.298 8368.443 - 8418.855: 68.1723% ( 23) 00:07:36.298 8418.855 - 8469.268: 68.3167% ( 22) 00:07:36.298 8469.268 - 8519.680: 68.4677% ( 23) 00:07:36.298 8519.680 - 8570.092: 68.6187% ( 23) 00:07:36.298 8570.092 - 8620.505: 68.7697% ( 23) 00:07:36.298 8620.505 - 8670.917: 68.9207% ( 23) 00:07:36.298 8670.917 - 8721.329: 69.0717% ( 23) 00:07:36.298 8721.329 - 8771.742: 69.2096% ( 21) 00:07:36.298 8771.742 - 8822.154: 69.4262% ( 33) 00:07:36.298 8822.154 - 8872.566: 69.5706% ( 22) 00:07:36.298 8872.566 - 8922.978: 69.7085% ( 21) 00:07:36.298 8922.978 - 8973.391: 69.8529% ( 22) 00:07:36.298 8973.391 - 9023.803: 69.9908% ( 21) 00:07:36.298 9023.803 - 9074.215: 70.1221% ( 20) 00:07:36.298 9074.215 - 9124.628: 70.2665% ( 22) 00:07:36.298 9124.628 - 9175.040: 70.4438% ( 27) 00:07:36.298 9175.040 - 9225.452: 70.6276% ( 28) 00:07:36.298 9225.452 - 9275.865: 70.8246% ( 30) 00:07:36.298 9275.865 - 9326.277: 71.0018% ( 27) 00:07:36.298 9326.277 - 9376.689: 71.2119% ( 32) 00:07:36.298 9376.689 - 9427.102: 71.3629% ( 23) 00:07:36.298 9427.102 - 9477.514: 71.5008% ( 21) 00:07:36.298 9477.514 - 9527.926: 71.6584% ( 24) 00:07:36.298 9527.926 - 9578.338: 71.7831% ( 19) 00:07:36.298 9578.338 - 9628.751: 71.8947% ( 17) 00:07:36.298 9628.751 - 9679.163: 72.0194% ( 19) 00:07:36.298 9679.163 - 9729.575: 72.1376% ( 18) 00:07:36.298 9729.575 - 9779.988: 72.2492% ( 17) 00:07:36.298 9779.988 - 9830.400: 72.3739% ( 19) 00:07:36.298 9830.400 - 9880.812: 72.6300% ( 39) 00:07:36.298 9880.812 - 9931.225: 72.8007% ( 26) 00:07:36.298 9931.225 - 9981.637: 72.9517% ( 23) 00:07:36.298 9981.637 - 10032.049: 73.1486% ( 30) 00:07:36.298 10032.049 - 10082.462: 73.3390% ( 29) 00:07:36.298 10082.462 - 10132.874: 73.5557% ( 33) 00:07:36.298 10132.874 - 10183.286: 73.7461% ( 29) 00:07:36.298 10183.286 - 10233.698: 73.9299% ( 28) 00:07:36.298 10233.698 - 10284.111: 74.0809% ( 23) 00:07:36.298 10284.111 - 10334.523: 74.2844% ( 31) 00:07:36.298 10334.523 - 10384.935: 74.4879% ( 31) 00:07:36.298 10384.935 - 10435.348: 74.7571% ( 41) 00:07:36.298 10435.348 - 10485.760: 75.0591% ( 46) 00:07:36.298 10485.760 - 10536.172: 75.3480% ( 44) 00:07:36.298 10536.172 - 10586.585: 75.6434% ( 45) 00:07:36.298 10586.585 - 10636.997: 75.9848% ( 52) 00:07:36.298 10636.997 - 10687.409: 76.2999% ( 48) 00:07:36.298 10687.409 - 10737.822: 76.5822% ( 43) 00:07:36.298 10737.822 - 10788.234: 76.8776% ( 45) 00:07:36.298 10788.234 - 10838.646: 77.1731% ( 45) 00:07:36.298 10838.646 - 10889.058: 77.5341% ( 55) 00:07:36.298 10889.058 - 10939.471: 77.9018% ( 56) 00:07:36.298 10939.471 - 10989.883: 78.3023% ( 61) 00:07:36.298 10989.883 - 11040.295: 
78.6436% ( 52) 00:07:36.298 11040.295 - 11090.708: 79.0310% ( 59) 00:07:36.298 11090.708 - 11141.120: 79.4183% ( 59) 00:07:36.298 11141.120 - 11191.532: 79.8845% ( 71) 00:07:36.298 11191.532 - 11241.945: 80.3768% ( 75) 00:07:36.298 11241.945 - 11292.357: 80.8627% ( 74) 00:07:36.298 11292.357 - 11342.769: 81.3616% ( 76) 00:07:36.298 11342.769 - 11393.182: 81.8540% ( 75) 00:07:36.298 11393.182 - 11443.594: 82.3398% ( 74) 00:07:36.298 11443.594 - 11494.006: 82.7928% ( 69) 00:07:36.298 11494.006 - 11544.418: 83.2589% ( 71) 00:07:36.298 11544.418 - 11594.831: 83.7119% ( 69) 00:07:36.298 11594.831 - 11645.243: 84.2240% ( 78) 00:07:36.298 11645.243 - 11695.655: 84.7164% ( 75) 00:07:36.298 11695.655 - 11746.068: 85.1956% ( 73) 00:07:36.298 11746.068 - 11796.480: 85.7077% ( 78) 00:07:36.298 11796.480 - 11846.892: 86.2001% ( 75) 00:07:36.298 11846.892 - 11897.305: 86.6991% ( 76) 00:07:36.298 11897.305 - 11947.717: 87.1849% ( 74) 00:07:36.298 11947.717 - 11998.129: 87.6576% ( 72) 00:07:36.298 11998.129 - 12048.542: 87.9989% ( 52) 00:07:36.298 12048.542 - 12098.954: 88.3666% ( 56) 00:07:36.298 12098.954 - 12149.366: 88.7211% ( 54) 00:07:36.298 12149.366 - 12199.778: 89.0625% ( 52) 00:07:36.298 12199.778 - 12250.191: 89.3908% ( 50) 00:07:36.298 12250.191 - 12300.603: 89.6534% ( 40) 00:07:36.298 12300.603 - 12351.015: 89.9882% ( 51) 00:07:36.298 12351.015 - 12401.428: 90.3493% ( 55) 00:07:36.298 12401.428 - 12451.840: 90.6972% ( 53) 00:07:36.298 12451.840 - 12502.252: 90.9730% ( 42) 00:07:36.298 12502.252 - 12552.665: 91.2553% ( 43) 00:07:36.298 12552.665 - 12603.077: 91.5901% ( 51) 00:07:36.298 12603.077 - 12653.489: 91.9577% ( 56) 00:07:36.298 12653.489 - 12703.902: 92.3319% ( 57) 00:07:36.298 12703.902 - 12754.314: 92.6996% ( 56) 00:07:36.298 12754.314 - 12804.726: 92.9819% ( 43) 00:07:36.298 12804.726 - 12855.138: 93.2970% ( 48) 00:07:36.298 12855.138 - 12905.551: 93.5530% ( 39) 00:07:36.298 12905.551 - 13006.375: 94.1045% ( 84) 00:07:36.298 13006.375 - 13107.200: 94.6560% ( 84) 00:07:36.298 13107.200 - 13208.025: 95.2009% ( 83) 00:07:36.298 13208.025 - 13308.849: 95.6867% ( 74) 00:07:36.298 13308.849 - 13409.674: 96.1069% ( 64) 00:07:36.298 13409.674 - 13510.498: 96.5074% ( 61) 00:07:36.298 13510.498 - 13611.323: 96.9341% ( 65) 00:07:36.298 13611.323 - 13712.148: 97.2623% ( 50) 00:07:36.298 13712.148 - 13812.972: 97.5184% ( 39) 00:07:36.298 13812.972 - 13913.797: 97.7350% ( 33) 00:07:36.298 13913.797 - 14014.622: 97.8926% ( 24) 00:07:36.298 14014.622 - 14115.446: 98.0699% ( 27) 00:07:36.298 14115.446 - 14216.271: 98.1815% ( 17) 00:07:36.298 14216.271 - 14317.095: 98.2602% ( 12) 00:07:36.298 14317.095 - 14417.920: 98.3193% ( 9) 00:07:36.298 14417.920 - 14518.745: 98.3522% ( 5) 00:07:36.298 14518.745 - 14619.569: 98.3784% ( 4) 00:07:36.298 14619.569 - 14720.394: 98.4112% ( 5) 00:07:36.298 14720.394 - 14821.218: 98.4441% ( 5) 00:07:36.298 14821.218 - 14922.043: 98.4769% ( 5) 00:07:36.298 14922.043 - 15022.868: 98.5032% ( 4) 00:07:36.298 15022.868 - 15123.692: 98.5557% ( 8) 00:07:36.298 15123.692 - 15224.517: 98.6279% ( 11) 00:07:36.298 15224.517 - 15325.342: 98.7001% ( 11) 00:07:36.298 15325.342 - 15426.166: 98.7789% ( 12) 00:07:36.298 15426.166 - 15526.991: 98.8577% ( 12) 00:07:36.298 15526.991 - 15627.815: 98.9364% ( 12) 00:07:36.298 15627.815 - 15728.640: 99.0152% ( 12) 00:07:36.298 15728.640 - 15829.465: 99.0874% ( 11) 00:07:36.298 15829.465 - 15930.289: 99.1334% ( 7) 00:07:36.298 15930.289 - 16031.114: 99.1597% ( 4) 00:07:36.298 23693.785 - 23794.609: 99.1794% ( 3) 00:07:36.298 23794.609 - 
23895.434: 99.2188% ( 6) 00:07:36.298 23895.434 - 23996.258: 99.2516% ( 5) 00:07:36.298 23996.258 - 24097.083: 99.2910% ( 6) 00:07:36.298 24097.083 - 24197.908: 99.3369% ( 7) 00:07:36.298 24197.908 - 24298.732: 99.3829% ( 7) 00:07:36.298 24298.732 - 24399.557: 99.4223% ( 6) 00:07:36.298 24399.557 - 24500.382: 99.4682% ( 7) 00:07:36.298 24500.382 - 24601.206: 99.5142% ( 7) 00:07:36.298 24601.206 - 24702.031: 99.5536% ( 6) 00:07:36.298 24702.031 - 24802.855: 99.5798% ( 4) 00:07:36.298 29844.086 - 30045.735: 99.6127% ( 5) 00:07:36.298 30045.735 - 30247.385: 99.6980% ( 13) 00:07:36.298 30247.385 - 30449.034: 99.7834% ( 13) 00:07:36.298 30449.034 - 30650.683: 99.8753% ( 14) 00:07:36.298 30650.683 - 30852.332: 99.9606% ( 13) 00:07:36.298 30852.332 - 31053.982: 100.0000% ( 6) 00:07:36.298 00:07:36.298 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:36.298 ============================================================================== 00:07:36.298 Range in us Cumulative IO count 00:07:36.298 3780.923 - 3806.129: 0.0328% ( 5) 00:07:36.298 3806.129 - 3831.335: 0.0394% ( 1) 00:07:36.298 3831.335 - 3856.542: 0.0591% ( 3) 00:07:36.298 3856.542 - 3881.748: 0.0722% ( 2) 00:07:36.298 3881.748 - 3906.954: 0.0919% ( 3) 00:07:36.298 3906.954 - 3932.160: 0.1050% ( 2) 00:07:36.298 3932.160 - 3957.366: 0.1182% ( 2) 00:07:36.298 3957.366 - 3982.572: 0.1379% ( 3) 00:07:36.298 3982.572 - 4007.778: 0.1510% ( 2) 00:07:36.298 4007.778 - 4032.985: 0.1641% ( 2) 00:07:36.298 4032.985 - 4058.191: 0.1838% ( 3) 00:07:36.298 4058.191 - 4083.397: 0.2035% ( 3) 00:07:36.298 4083.397 - 4108.603: 0.2166% ( 2) 00:07:36.298 4108.603 - 4133.809: 0.2363% ( 3) 00:07:36.299 4133.809 - 4159.015: 0.2560% ( 3) 00:07:36.299 4159.015 - 4184.222: 0.2692% ( 2) 00:07:36.299 4184.222 - 4209.428: 0.2889% ( 3) 00:07:36.299 4209.428 - 4234.634: 0.3020% ( 2) 00:07:36.299 4234.634 - 4259.840: 0.3217% ( 3) 00:07:36.299 4259.840 - 4285.046: 0.3414% ( 3) 00:07:36.299 4285.046 - 4310.252: 0.3611% ( 3) 00:07:36.299 4310.252 - 4335.458: 0.3742% ( 2) 00:07:36.299 4335.458 - 4360.665: 0.3939% ( 3) 00:07:36.299 4360.665 - 4385.871: 0.4136% ( 3) 00:07:36.299 4385.871 - 4411.077: 0.4202% ( 1) 00:07:36.299 5419.323 - 5444.529: 0.4333% ( 2) 00:07:36.299 5444.529 - 5469.735: 0.4530% ( 3) 00:07:36.299 5469.735 - 5494.942: 0.4661% ( 2) 00:07:36.299 5494.942 - 5520.148: 0.4858% ( 3) 00:07:36.299 5520.148 - 5545.354: 0.4989% ( 2) 00:07:36.299 5545.354 - 5570.560: 0.5252% ( 4) 00:07:36.299 5570.560 - 5595.766: 0.5449% ( 3) 00:07:36.299 5595.766 - 5620.972: 0.5646% ( 3) 00:07:36.299 5620.972 - 5646.178: 0.5843% ( 3) 00:07:36.299 5646.178 - 5671.385: 0.6106% ( 4) 00:07:36.299 5671.385 - 5696.591: 0.6303% ( 3) 00:07:36.299 5696.591 - 5721.797: 0.6499% ( 3) 00:07:36.299 5721.797 - 5747.003: 0.6696% ( 3) 00:07:36.299 5747.003 - 5772.209: 0.6893% ( 3) 00:07:36.299 5772.209 - 5797.415: 0.7025% ( 2) 00:07:36.299 5797.415 - 5822.622: 0.7222% ( 3) 00:07:36.299 5822.622 - 5847.828: 0.7419% ( 3) 00:07:36.299 5847.828 - 5873.034: 0.7616% ( 3) 00:07:36.299 5873.034 - 5898.240: 0.7812% ( 3) 00:07:36.299 5898.240 - 5923.446: 0.7944% ( 2) 00:07:36.299 5923.446 - 5948.652: 0.8141% ( 3) 00:07:36.299 5948.652 - 5973.858: 0.8206% ( 1) 00:07:36.299 5973.858 - 5999.065: 0.8403% ( 3) 00:07:36.299 6024.271 - 6049.477: 0.8929% ( 8) 00:07:36.299 6049.477 - 6074.683: 0.9979% ( 16) 00:07:36.299 6074.683 - 6099.889: 1.0767% ( 12) 00:07:36.299 6099.889 - 6125.095: 1.1620% ( 13) 00:07:36.299 6125.095 - 6150.302: 1.2671% ( 16) 00:07:36.299 6150.302 - 6175.508: 1.4772% ( 32) 
00:07:36.299 6175.508 - 6200.714: 1.7660% ( 44) 00:07:36.299 6200.714 - 6225.920: 2.2650% ( 76) 00:07:36.299 6225.920 - 6251.126: 2.8821% ( 94) 00:07:36.299 6251.126 - 6276.332: 3.8078% ( 141) 00:07:36.299 6276.332 - 6301.538: 4.8976% ( 166) 00:07:36.299 6301.538 - 6326.745: 6.5060% ( 245) 00:07:36.299 6326.745 - 6351.951: 8.1670% ( 253) 00:07:36.299 6351.951 - 6377.157: 9.9002% ( 264) 00:07:36.299 6377.157 - 6402.363: 11.6728% ( 270) 00:07:36.299 6402.363 - 6427.569: 13.3929% ( 262) 00:07:36.299 6427.569 - 6452.775: 15.0735% ( 256) 00:07:36.299 6452.775 - 6503.188: 18.6384% ( 543) 00:07:36.299 6503.188 - 6553.600: 22.3674% ( 568) 00:07:36.299 6553.600 - 6604.012: 26.1292% ( 573) 00:07:36.299 6604.012 - 6654.425: 29.8188% ( 562) 00:07:36.299 6654.425 - 6704.837: 33.7382% ( 597) 00:07:36.299 6704.837 - 6755.249: 37.6641% ( 598) 00:07:36.299 6755.249 - 6805.662: 41.4785% ( 581) 00:07:36.299 6805.662 - 6856.074: 45.3256% ( 586) 00:07:36.299 6856.074 - 6906.486: 49.1794% ( 587) 00:07:36.299 6906.486 - 6956.898: 52.6983% ( 536) 00:07:36.299 6956.898 - 7007.311: 55.8495% ( 480) 00:07:36.299 7007.311 - 7057.723: 58.5544% ( 412) 00:07:36.299 7057.723 - 7108.135: 60.7734% ( 338) 00:07:36.299 7108.135 - 7158.548: 62.3227% ( 236) 00:07:36.299 7158.548 - 7208.960: 63.3994% ( 164) 00:07:36.299 7208.960 - 7259.372: 64.1019% ( 107) 00:07:36.299 7259.372 - 7309.785: 64.6205% ( 79) 00:07:36.299 7309.785 - 7360.197: 65.0867% ( 71) 00:07:36.299 7360.197 - 7410.609: 65.4543% ( 56) 00:07:36.299 7410.609 - 7461.022: 65.7694% ( 48) 00:07:36.299 7461.022 - 7511.434: 66.0123% ( 37) 00:07:36.299 7511.434 - 7561.846: 66.2290% ( 33) 00:07:36.299 7561.846 - 7612.258: 66.4325% ( 31) 00:07:36.299 7612.258 - 7662.671: 66.6163% ( 28) 00:07:36.299 7662.671 - 7713.083: 66.7345% ( 18) 00:07:36.299 7713.083 - 7763.495: 66.8527% ( 18) 00:07:36.299 7763.495 - 7813.908: 66.9315% ( 12) 00:07:36.299 7813.908 - 7864.320: 67.0168% ( 13) 00:07:36.299 7864.320 - 7914.732: 67.0628% ( 7) 00:07:36.299 7914.732 - 7965.145: 67.1284% ( 10) 00:07:36.299 7965.145 - 8015.557: 67.2072% ( 12) 00:07:36.299 8015.557 - 8065.969: 67.2925% ( 13) 00:07:36.299 8065.969 - 8116.382: 67.3648% ( 11) 00:07:36.299 8116.382 - 8166.794: 67.4238% ( 9) 00:07:36.299 8166.794 - 8217.206: 67.5092% ( 13) 00:07:36.299 8217.206 - 8267.618: 67.5748% ( 10) 00:07:36.299 8267.618 - 8318.031: 67.6405% ( 10) 00:07:36.299 8318.031 - 8368.443: 67.7061% ( 10) 00:07:36.299 8368.443 - 8418.855: 67.8834% ( 27) 00:07:36.299 8418.855 - 8469.268: 67.9950% ( 17) 00:07:36.299 8469.268 - 8519.680: 68.1394% ( 22) 00:07:36.299 8519.680 - 8570.092: 68.2904% ( 23) 00:07:36.299 8570.092 - 8620.505: 68.4152% ( 19) 00:07:36.299 8620.505 - 8670.917: 68.5727% ( 24) 00:07:36.299 8670.917 - 8721.329: 68.7303% ( 24) 00:07:36.299 8721.329 - 8771.742: 68.9076% ( 27) 00:07:36.299 8771.742 - 8822.154: 69.1242% ( 33) 00:07:36.299 8822.154 - 8872.566: 69.3540% ( 35) 00:07:36.299 8872.566 - 8922.978: 69.5444% ( 29) 00:07:36.299 8922.978 - 8973.391: 69.7348% ( 29) 00:07:36.299 8973.391 - 9023.803: 69.9055% ( 26) 00:07:36.299 9023.803 - 9074.215: 70.1484% ( 37) 00:07:36.299 9074.215 - 9124.628: 70.3388% ( 29) 00:07:36.299 9124.628 - 9175.040: 70.5029% ( 25) 00:07:36.299 9175.040 - 9225.452: 70.6670% ( 25) 00:07:36.299 9225.452 - 9275.865: 70.8246% ( 24) 00:07:36.299 9275.865 - 9326.277: 70.9690% ( 22) 00:07:36.299 9326.277 - 9376.689: 71.1069% ( 21) 00:07:36.299 9376.689 - 9427.102: 71.2316% ( 19) 00:07:36.299 9427.102 - 9477.514: 71.4220% ( 29) 00:07:36.299 9477.514 - 9527.926: 71.5796% ( 24) 00:07:36.299 
9527.926 - 9578.338: 71.7503% ( 26) 00:07:36.299 9578.338 - 9628.751: 71.9144% ( 25) 00:07:36.299 9628.751 - 9679.163: 72.0982% ( 28) 00:07:36.299 9679.163 - 9729.575: 72.2886% ( 29) 00:07:36.299 9729.575 - 9779.988: 72.4659% ( 27) 00:07:36.299 9779.988 - 9830.400: 72.6759% ( 32) 00:07:36.299 9830.400 - 9880.812: 72.9254% ( 38) 00:07:36.299 9880.812 - 9931.225: 73.1355% ( 32) 00:07:36.299 9931.225 - 9981.637: 73.2931% ( 24) 00:07:36.299 9981.637 - 10032.049: 73.4638% ( 26) 00:07:36.299 10032.049 - 10082.462: 73.5951% ( 20) 00:07:36.299 10082.462 - 10132.874: 73.7198% ( 19) 00:07:36.299 10132.874 - 10183.286: 73.9168% ( 30) 00:07:36.299 10183.286 - 10233.698: 74.1006% ( 28) 00:07:36.299 10233.698 - 10284.111: 74.2581% ( 24) 00:07:36.299 10284.111 - 10334.523: 74.3960% ( 21) 00:07:36.299 10334.523 - 10384.935: 74.5536% ( 24) 00:07:36.299 10384.935 - 10435.348: 74.6980% ( 22) 00:07:36.299 10435.348 - 10485.760: 74.8424% ( 22) 00:07:36.299 10485.760 - 10536.172: 75.0131% ( 26) 00:07:36.299 10536.172 - 10586.585: 75.2626% ( 38) 00:07:36.299 10586.585 - 10636.997: 75.5186% ( 39) 00:07:36.299 10636.997 - 10687.409: 75.7484% ( 35) 00:07:36.299 10687.409 - 10737.822: 76.0504% ( 46) 00:07:36.299 10737.822 - 10788.234: 76.2999% ( 38) 00:07:36.299 10788.234 - 10838.646: 76.5428% ( 37) 00:07:36.299 10838.646 - 10889.058: 76.9170% ( 57) 00:07:36.299 10889.058 - 10939.471: 77.3109% ( 60) 00:07:36.299 10939.471 - 10989.883: 77.7377% ( 65) 00:07:36.299 10989.883 - 11040.295: 78.2103% ( 72) 00:07:36.299 11040.295 - 11090.708: 78.6765% ( 71) 00:07:36.299 11090.708 - 11141.120: 79.0901% ( 63) 00:07:36.299 11141.120 - 11191.532: 79.5693% ( 73) 00:07:36.299 11191.532 - 11241.945: 80.0223% ( 69) 00:07:36.299 11241.945 - 11292.357: 80.5278% ( 77) 00:07:36.299 11292.357 - 11342.769: 81.1253% ( 91) 00:07:36.299 11342.769 - 11393.182: 81.7227% ( 91) 00:07:36.299 11393.182 - 11443.594: 82.2807% ( 85) 00:07:36.299 11443.594 - 11494.006: 82.8782% ( 91) 00:07:36.299 11494.006 - 11544.418: 83.4428% ( 86) 00:07:36.299 11544.418 - 11594.831: 84.1387% ( 106) 00:07:36.299 11594.831 - 11645.243: 84.7361% ( 91) 00:07:36.299 11645.243 - 11695.655: 85.3269% ( 90) 00:07:36.299 11695.655 - 11746.068: 85.8850% ( 85) 00:07:36.299 11746.068 - 11796.480: 86.3774% ( 75) 00:07:36.299 11796.480 - 11846.892: 86.8304% ( 69) 00:07:36.299 11846.892 - 11897.305: 87.3030% ( 72) 00:07:36.299 11897.305 - 11947.717: 87.6838% ( 58) 00:07:36.299 11947.717 - 11998.129: 88.1499% ( 71) 00:07:36.299 11998.129 - 12048.542: 88.5767% ( 65) 00:07:36.300 12048.542 - 12098.954: 89.0756% ( 76) 00:07:36.300 12098.954 - 12149.366: 89.4433% ( 56) 00:07:36.300 12149.366 - 12199.778: 89.8175% ( 57) 00:07:36.300 12199.778 - 12250.191: 90.1589% ( 52) 00:07:36.300 12250.191 - 12300.603: 90.5003% ( 52) 00:07:36.300 12300.603 - 12351.015: 90.8023% ( 46) 00:07:36.300 12351.015 - 12401.428: 91.0911% ( 44) 00:07:36.300 12401.428 - 12451.840: 91.3997% ( 47) 00:07:36.300 12451.840 - 12502.252: 91.6951% ( 45) 00:07:36.300 12502.252 - 12552.665: 91.9840% ( 44) 00:07:36.300 12552.665 - 12603.077: 92.2663% ( 43) 00:07:36.300 12603.077 - 12653.489: 92.5223% ( 39) 00:07:36.300 12653.489 - 12703.902: 92.7521% ( 35) 00:07:36.300 12703.902 - 12754.314: 93.0344% ( 43) 00:07:36.300 12754.314 - 12804.726: 93.2970% ( 40) 00:07:36.300 12804.726 - 12855.138: 93.5530% ( 39) 00:07:36.300 12855.138 - 12905.551: 93.8222% ( 41) 00:07:36.300 12905.551 - 13006.375: 94.3212% ( 76) 00:07:36.300 13006.375 - 13107.200: 94.7216% ( 61) 00:07:36.300 13107.200 - 13208.025: 95.0565% ( 51) 00:07:36.300 
13208.025 - 13308.849: 95.4372% ( 58) 00:07:36.300 13308.849 - 13409.674: 95.8902% ( 69) 00:07:36.300 13409.674 - 13510.498: 96.3892% ( 76) 00:07:36.300 13510.498 - 13611.323: 96.8028% ( 63) 00:07:36.300 13611.323 - 13712.148: 97.1376% ( 51) 00:07:36.300 13712.148 - 13812.972: 97.3871% ( 38) 00:07:36.300 13812.972 - 13913.797: 97.5972% ( 32) 00:07:36.300 13913.797 - 14014.622: 97.7941% ( 30) 00:07:36.300 14014.622 - 14115.446: 97.9976% ( 31) 00:07:36.300 14115.446 - 14216.271: 98.0895% ( 14) 00:07:36.300 14216.271 - 14317.095: 98.1618% ( 11) 00:07:36.300 14317.095 - 14417.920: 98.2274% ( 10) 00:07:36.300 14417.920 - 14518.745: 98.3062% ( 12) 00:07:36.300 14518.745 - 14619.569: 98.3193% ( 2) 00:07:36.300 14619.569 - 14720.394: 98.3587% ( 6) 00:07:36.300 14720.394 - 14821.218: 98.4638% ( 16) 00:07:36.300 14821.218 - 14922.043: 98.5491% ( 13) 00:07:36.300 14922.043 - 15022.868: 98.6410% ( 14) 00:07:36.300 15022.868 - 15123.692: 98.6870% ( 7) 00:07:36.300 15123.692 - 15224.517: 98.7526% ( 10) 00:07:36.300 15224.517 - 15325.342: 98.8248% ( 11) 00:07:36.300 15325.342 - 15426.166: 98.8971% ( 11) 00:07:36.300 15426.166 - 15526.991: 98.9627% ( 10) 00:07:36.300 15526.991 - 15627.815: 98.9890% ( 4) 00:07:36.300 15627.815 - 15728.640: 99.0152% ( 4) 00:07:36.300 15728.640 - 15829.465: 99.0481% ( 5) 00:07:36.300 15829.465 - 15930.289: 99.0809% ( 5) 00:07:36.300 15930.289 - 16031.114: 99.1137% ( 5) 00:07:36.300 16031.114 - 16131.938: 99.1465% ( 5) 00:07:36.300 16131.938 - 16232.763: 99.1597% ( 2) 00:07:36.300 23391.311 - 23492.135: 99.1991% ( 6) 00:07:36.300 23492.135 - 23592.960: 99.2384% ( 6) 00:07:36.300 23592.960 - 23693.785: 99.2844% ( 7) 00:07:36.300 23693.785 - 23794.609: 99.3238% ( 6) 00:07:36.300 23794.609 - 23895.434: 99.3632% ( 6) 00:07:36.300 23895.434 - 23996.258: 99.4091% ( 7) 00:07:36.300 23996.258 - 24097.083: 99.4485% ( 6) 00:07:36.300 24097.083 - 24197.908: 99.4879% ( 6) 00:07:36.300 24197.908 - 24298.732: 99.5339% ( 7) 00:07:36.300 24298.732 - 24399.557: 99.5733% ( 6) 00:07:36.300 24399.557 - 24500.382: 99.5798% ( 1) 00:07:36.300 29239.138 - 29440.788: 99.6061% ( 4) 00:07:36.300 29440.788 - 29642.437: 99.6980% ( 14) 00:07:36.300 29642.437 - 29844.086: 99.7834% ( 13) 00:07:36.300 29844.086 - 30045.735: 99.8687% ( 13) 00:07:36.300 30045.735 - 30247.385: 99.9540% ( 13) 00:07:36.300 30247.385 - 30449.034: 100.0000% ( 7) 00:07:36.300 00:07:36.300 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:36.300 ============================================================================== 00:07:36.300 Range in us Cumulative IO count 00:07:36.300 3604.480 - 3629.686: 0.0197% ( 3) 00:07:36.300 3629.686 - 3654.892: 0.0657% ( 7) 00:07:36.300 3680.098 - 3705.305: 0.0788% ( 2) 00:07:36.300 3705.305 - 3730.511: 0.0985% ( 3) 00:07:36.300 3730.511 - 3755.717: 0.1182% ( 3) 00:07:36.300 3755.717 - 3780.923: 0.1379% ( 3) 00:07:36.300 3780.923 - 3806.129: 0.1510% ( 2) 00:07:36.300 3806.129 - 3831.335: 0.1707% ( 3) 00:07:36.300 3831.335 - 3856.542: 0.1904% ( 3) 00:07:36.300 3856.542 - 3881.748: 0.2035% ( 2) 00:07:36.300 3881.748 - 3906.954: 0.2232% ( 3) 00:07:36.300 3906.954 - 3932.160: 0.2429% ( 3) 00:07:36.300 3932.160 - 3957.366: 0.2560% ( 2) 00:07:36.300 3957.366 - 3982.572: 0.2757% ( 3) 00:07:36.300 3982.572 - 4007.778: 0.2954% ( 3) 00:07:36.300 4007.778 - 4032.985: 0.3086% ( 2) 00:07:36.300 4032.985 - 4058.191: 0.3283% ( 3) 00:07:36.300 4058.191 - 4083.397: 0.3480% ( 3) 00:07:36.300 4083.397 - 4108.603: 0.3611% ( 2) 00:07:36.300 4108.603 - 4133.809: 0.3808% ( 3) 00:07:36.300 4133.809 - 
4159.015: 0.4005% ( 3) 00:07:36.300 4159.015 - 4184.222: 0.4202% ( 3) 00:07:36.300 5242.880 - 5268.086: 0.4661% ( 7) 00:07:36.300 5268.086 - 5293.292: 0.5055% ( 6) 00:07:36.300 5293.292 - 5318.498: 0.5121% ( 1) 00:07:36.300 5318.498 - 5343.705: 0.5252% ( 2) 00:07:36.300 5343.705 - 5368.911: 0.5318% ( 1) 00:07:36.300 5368.911 - 5394.117: 0.5449% ( 2) 00:07:36.300 5394.117 - 5419.323: 0.5646% ( 3) 00:07:36.300 5419.323 - 5444.529: 0.5777% ( 2) 00:07:36.300 5444.529 - 5469.735: 0.6040% ( 4) 00:07:36.300 5469.735 - 5494.942: 0.6368% ( 5) 00:07:36.300 5494.942 - 5520.148: 0.6499% ( 2) 00:07:36.300 5520.148 - 5545.354: 0.6696% ( 3) 00:07:36.300 5545.354 - 5570.560: 0.6828% ( 2) 00:07:36.300 5570.560 - 5595.766: 0.7025% ( 3) 00:07:36.300 5595.766 - 5620.972: 0.7090% ( 1) 00:07:36.300 5620.972 - 5646.178: 0.7222% ( 2) 00:07:36.300 5646.178 - 5671.385: 0.7419% ( 3) 00:07:36.300 5671.385 - 5696.591: 0.7616% ( 3) 00:07:36.300 5696.591 - 5721.797: 0.7747% ( 2) 00:07:36.300 5721.797 - 5747.003: 0.7944% ( 3) 00:07:36.300 5747.003 - 5772.209: 0.8141% ( 3) 00:07:36.300 5772.209 - 5797.415: 0.8272% ( 2) 00:07:36.300 5797.415 - 5822.622: 0.8403% ( 2) 00:07:36.300 6024.271 - 6049.477: 0.8469% ( 1) 00:07:36.300 6049.477 - 6074.683: 0.8666% ( 3) 00:07:36.300 6074.683 - 6099.889: 0.8797% ( 2) 00:07:36.300 6099.889 - 6125.095: 0.9191% ( 6) 00:07:36.300 6125.095 - 6150.302: 1.0242% ( 16) 00:07:36.300 6150.302 - 6175.508: 1.2408% ( 33) 00:07:36.300 6175.508 - 6200.714: 1.5822% ( 52) 00:07:36.300 6200.714 - 6225.920: 1.9827% ( 61) 00:07:36.300 6225.920 - 6251.126: 2.7770% ( 121) 00:07:36.300 6251.126 - 6276.332: 3.8275% ( 160) 00:07:36.300 6276.332 - 6301.538: 5.0683% ( 189) 00:07:36.300 6301.538 - 6326.745: 6.5454% ( 225) 00:07:36.300 6326.745 - 6351.951: 8.1998% ( 252) 00:07:36.300 6351.951 - 6377.157: 9.7755% ( 240) 00:07:36.300 6377.157 - 6402.363: 11.5349% ( 268) 00:07:36.300 6402.363 - 6427.569: 13.4322% ( 289) 00:07:36.300 6427.569 - 6452.775: 15.2442% ( 276) 00:07:36.300 6452.775 - 6503.188: 18.7631% ( 536) 00:07:36.300 6503.188 - 6553.600: 22.3608% ( 548) 00:07:36.300 6553.600 - 6604.012: 26.2211% ( 588) 00:07:36.300 6604.012 - 6654.425: 29.9895% ( 574) 00:07:36.300 6654.425 - 6704.837: 33.7973% ( 580) 00:07:36.300 6704.837 - 6755.249: 37.9070% ( 626) 00:07:36.300 6755.249 - 6805.662: 41.8264% ( 597) 00:07:36.300 6805.662 - 6856.074: 45.6014% ( 575) 00:07:36.300 6856.074 - 6906.486: 49.3238% ( 567) 00:07:36.300 6906.486 - 6956.898: 52.7770% ( 526) 00:07:36.300 6956.898 - 7007.311: 55.9349% ( 481) 00:07:36.300 7007.311 - 7057.723: 58.5741% ( 402) 00:07:36.300 7057.723 - 7108.135: 60.6618% ( 318) 00:07:36.300 7108.135 - 7158.548: 62.2046% ( 235) 00:07:36.300 7158.548 - 7208.960: 63.2419% ( 158) 00:07:36.300 7208.960 - 7259.372: 63.8787% ( 97) 00:07:36.300 7259.372 - 7309.785: 64.3382% ( 70) 00:07:36.300 7309.785 - 7360.197: 64.7781% ( 67) 00:07:36.300 7360.197 - 7410.609: 65.1786% ( 61) 00:07:36.300 7410.609 - 7461.022: 65.5134% ( 51) 00:07:36.300 7461.022 - 7511.434: 65.7826% ( 41) 00:07:36.300 7511.434 - 7561.846: 66.0320% ( 38) 00:07:36.300 7561.846 - 7612.258: 66.2356% ( 31) 00:07:36.300 7612.258 - 7662.671: 66.3734% ( 21) 00:07:36.300 7662.671 - 7713.083: 66.5047% ( 20) 00:07:36.300 7713.083 - 7763.495: 66.7017% ( 30) 00:07:36.300 7763.495 - 7813.908: 66.8921% ( 29) 00:07:36.300 7813.908 - 7864.320: 67.0365% ( 22) 00:07:36.300 7864.320 - 7914.732: 67.1547% ( 18) 00:07:36.300 7914.732 - 7965.145: 67.2794% ( 19) 00:07:36.300 7965.145 - 8015.557: 67.3845% ( 16) 00:07:36.300 8015.557 - 8065.969: 67.5092% ( 
19) 00:07:36.300 8065.969 - 8116.382: 67.6142% ( 16) 00:07:36.300 8116.382 - 8166.794: 67.6996% ( 13) 00:07:36.300 8166.794 - 8217.206: 67.7784% ( 12) 00:07:36.300 8217.206 - 8267.618: 67.8768% ( 15) 00:07:36.300 8267.618 - 8318.031: 67.9753% ( 15) 00:07:36.300 8318.031 - 8368.443: 68.1066% ( 20) 00:07:36.300 8368.443 - 8418.855: 68.2511% ( 22) 00:07:36.300 8418.855 - 8469.268: 68.4086% ( 24) 00:07:36.300 8469.268 - 8519.680: 68.5924% ( 28) 00:07:36.300 8519.680 - 8570.092: 68.7106% ( 18) 00:07:36.300 8570.092 - 8620.505: 68.8944% ( 28) 00:07:36.300 8620.505 - 8670.917: 69.0257% ( 20) 00:07:36.300 8670.917 - 8721.329: 69.1570% ( 20) 00:07:36.300 8721.329 - 8771.742: 69.3015% ( 22) 00:07:36.300 8771.742 - 8822.154: 69.4262% ( 19) 00:07:36.300 8822.154 - 8872.566: 69.5706% ( 22) 00:07:36.300 8872.566 - 8922.978: 69.6954% ( 19) 00:07:36.300 8922.978 - 8973.391: 69.8070% ( 17) 00:07:36.300 8973.391 - 9023.803: 69.9449% ( 21) 00:07:36.300 9023.803 - 9074.215: 70.1090% ( 25) 00:07:36.300 9074.215 - 9124.628: 70.2862% ( 27) 00:07:36.300 9124.628 - 9175.040: 70.4504% ( 25) 00:07:36.300 9175.040 - 9225.452: 70.6342% ( 28) 00:07:36.300 9225.452 - 9275.865: 70.7852% ( 23) 00:07:36.300 9275.865 - 9326.277: 71.0084% ( 34) 00:07:36.300 9326.277 - 9376.689: 71.1988% ( 29) 00:07:36.300 9376.689 - 9427.102: 71.3695% ( 26) 00:07:36.300 9427.102 - 9477.514: 71.5664% ( 30) 00:07:36.301 9477.514 - 9527.926: 71.7371% ( 26) 00:07:36.301 9527.926 - 9578.338: 71.9078% ( 26) 00:07:36.301 9578.338 - 9628.751: 72.1704% ( 40) 00:07:36.301 9628.751 - 9679.163: 72.3543% ( 28) 00:07:36.301 9679.163 - 9729.575: 72.4921% ( 21) 00:07:36.301 9729.575 - 9779.988: 72.6366% ( 22) 00:07:36.301 9779.988 - 9830.400: 72.7941% ( 24) 00:07:36.301 9830.400 - 9880.812: 73.0042% ( 32) 00:07:36.301 9880.812 - 9931.225: 73.2471% ( 37) 00:07:36.301 9931.225 - 9981.637: 73.4703% ( 34) 00:07:36.301 9981.637 - 10032.049: 73.6870% ( 33) 00:07:36.301 10032.049 - 10082.462: 73.8445% ( 24) 00:07:36.301 10082.462 - 10132.874: 73.9955% ( 23) 00:07:36.301 10132.874 - 10183.286: 74.1662% ( 26) 00:07:36.301 10183.286 - 10233.698: 74.3041% ( 21) 00:07:36.301 10233.698 - 10284.111: 74.4551% ( 23) 00:07:36.301 10284.111 - 10334.523: 74.6061% ( 23) 00:07:36.301 10334.523 - 10384.935: 74.7571% ( 23) 00:07:36.301 10384.935 - 10435.348: 74.9212% ( 25) 00:07:36.301 10435.348 - 10485.760: 75.1444% ( 34) 00:07:36.301 10485.760 - 10536.172: 75.3545% ( 32) 00:07:36.301 10536.172 - 10586.585: 75.5252% ( 26) 00:07:36.301 10586.585 - 10636.997: 75.7222% ( 30) 00:07:36.301 10636.997 - 10687.409: 75.9585% ( 36) 00:07:36.301 10687.409 - 10737.822: 76.1883% ( 35) 00:07:36.301 10737.822 - 10788.234: 76.4575% ( 41) 00:07:36.301 10788.234 - 10838.646: 76.7660% ( 47) 00:07:36.301 10838.646 - 10889.058: 77.0352% ( 41) 00:07:36.301 10889.058 - 10939.471: 77.3634% ( 50) 00:07:36.301 10939.471 - 10989.883: 77.7180% ( 54) 00:07:36.301 10989.883 - 11040.295: 78.1578% ( 67) 00:07:36.301 11040.295 - 11090.708: 78.5255% ( 56) 00:07:36.301 11090.708 - 11141.120: 78.9391% ( 63) 00:07:36.301 11141.120 - 11191.532: 79.3395% ( 61) 00:07:36.301 11191.532 - 11241.945: 79.7860% ( 68) 00:07:36.301 11241.945 - 11292.357: 80.3506% ( 86) 00:07:36.301 11292.357 - 11342.769: 80.9874% ( 97) 00:07:36.301 11342.769 - 11393.182: 81.5060% ( 79) 00:07:36.301 11393.182 - 11443.594: 82.0706% ( 86) 00:07:36.301 11443.594 - 11494.006: 82.6549% ( 89) 00:07:36.301 11494.006 - 11544.418: 83.2327% ( 88) 00:07:36.301 11544.418 - 11594.831: 83.7776% ( 83) 00:07:36.301 11594.831 - 11645.243: 84.2962% ( 79) 
00:07:36.301 11645.243 - 11695.655: 84.8280% ( 81) 00:07:36.301 11695.655 - 11746.068: 85.3401% ( 78) 00:07:36.301 11746.068 - 11796.480: 85.9375% ( 91) 00:07:36.301 11796.480 - 11846.892: 86.5087% ( 87) 00:07:36.301 11846.892 - 11897.305: 87.1324% ( 95) 00:07:36.301 11897.305 - 11947.717: 87.6707% ( 82) 00:07:36.301 11947.717 - 11998.129: 88.2156% ( 83) 00:07:36.301 11998.129 - 12048.542: 88.6686% ( 69) 00:07:36.301 12048.542 - 12098.954: 89.1216% ( 69) 00:07:36.301 12098.954 - 12149.366: 89.5221% ( 61) 00:07:36.301 12149.366 - 12199.778: 89.9225% ( 61) 00:07:36.301 12199.778 - 12250.191: 90.3099% ( 59) 00:07:36.301 12250.191 - 12300.603: 90.6447% ( 51) 00:07:36.301 12300.603 - 12351.015: 90.9270% ( 43) 00:07:36.301 12351.015 - 12401.428: 91.1699% ( 37) 00:07:36.301 12401.428 - 12451.840: 91.4194% ( 38) 00:07:36.301 12451.840 - 12502.252: 91.7411% ( 49) 00:07:36.301 12502.252 - 12552.665: 92.1153% ( 57) 00:07:36.301 12552.665 - 12603.077: 92.4370% ( 49) 00:07:36.301 12603.077 - 12653.489: 92.7390% ( 46) 00:07:36.301 12653.489 - 12703.902: 93.0607% ( 49) 00:07:36.301 12703.902 - 12754.314: 93.3561% ( 45) 00:07:36.301 12754.314 - 12804.726: 93.5924% ( 36) 00:07:36.301 12804.726 - 12855.138: 93.7697% ( 27) 00:07:36.301 12855.138 - 12905.551: 93.9666% ( 30) 00:07:36.301 12905.551 - 13006.375: 94.3868% ( 64) 00:07:36.301 13006.375 - 13107.200: 94.8004% ( 63) 00:07:36.301 13107.200 - 13208.025: 95.1943% ( 60) 00:07:36.301 13208.025 - 13308.849: 95.5882% ( 60) 00:07:36.301 13308.849 - 13409.674: 95.8968% ( 47) 00:07:36.301 13409.674 - 13510.498: 96.1003% ( 31) 00:07:36.301 13510.498 - 13611.323: 96.3761% ( 42) 00:07:36.301 13611.323 - 13712.148: 96.6452% ( 41) 00:07:36.301 13712.148 - 13812.972: 96.8816% ( 36) 00:07:36.301 13812.972 - 13913.797: 97.1704% ( 44) 00:07:36.301 13913.797 - 14014.622: 97.4527% ( 43) 00:07:36.301 14014.622 - 14115.446: 97.6956% ( 37) 00:07:36.301 14115.446 - 14216.271: 97.9582% ( 40) 00:07:36.301 14216.271 - 14317.095: 98.1815% ( 34) 00:07:36.301 14317.095 - 14417.920: 98.3718% ( 29) 00:07:36.301 14417.920 - 14518.745: 98.4835% ( 17) 00:07:36.301 14518.745 - 14619.569: 98.5622% ( 12) 00:07:36.301 14619.569 - 14720.394: 98.6345% ( 11) 00:07:36.301 14720.394 - 14821.218: 98.6607% ( 4) 00:07:36.301 14821.218 - 14922.043: 98.6935% ( 5) 00:07:36.301 14922.043 - 15022.868: 98.7592% ( 10) 00:07:36.301 15022.868 - 15123.692: 98.8051% ( 7) 00:07:36.301 15123.692 - 15224.517: 98.8314% ( 4) 00:07:36.301 15224.517 - 15325.342: 98.8642% ( 5) 00:07:36.301 15325.342 - 15426.166: 98.8905% ( 4) 00:07:36.301 15426.166 - 15526.991: 98.9233% ( 5) 00:07:36.301 15526.991 - 15627.815: 98.9561% ( 5) 00:07:36.301 15627.815 - 15728.640: 98.9890% ( 5) 00:07:36.301 15728.640 - 15829.465: 99.0218% ( 5) 00:07:36.301 15829.465 - 15930.289: 99.0481% ( 4) 00:07:36.301 15930.289 - 16031.114: 99.0743% ( 4) 00:07:36.301 16031.114 - 16131.938: 99.1071% ( 5) 00:07:36.301 16131.938 - 16232.763: 99.1400% ( 5) 00:07:36.301 16232.763 - 16333.588: 99.1597% ( 3) 00:07:36.301 23290.486 - 23391.311: 99.1662% ( 1) 00:07:36.301 23391.311 - 23492.135: 99.1991% ( 5) 00:07:36.301 23492.135 - 23592.960: 99.2450% ( 7) 00:07:36.301 23592.960 - 23693.785: 99.2844% ( 6) 00:07:36.301 23693.785 - 23794.609: 99.3304% ( 7) 00:07:36.301 23794.609 - 23895.434: 99.3697% ( 6) 00:07:36.301 23895.434 - 23996.258: 99.4091% ( 6) 00:07:36.301 23996.258 - 24097.083: 99.4420% ( 5) 00:07:36.301 24097.083 - 24197.908: 99.4879% ( 7) 00:07:36.301 24197.908 - 24298.732: 99.5339% ( 7) 00:07:36.301 24298.732 - 24399.557: 99.5733% ( 6) 
00:07:36.301 24399.557 - 24500.382: 99.5798% ( 1) 00:07:36.301 28634.191 - 28835.840: 99.6061% ( 4) 00:07:36.301 28835.840 - 29037.489: 99.6914% ( 13) 00:07:36.301 29037.489 - 29239.138: 99.7637% ( 11) 00:07:36.301 29239.138 - 29440.788: 99.8424% ( 12) 00:07:36.301 29440.788 - 29642.437: 99.9147% ( 11) 00:07:36.301 29642.437 - 29844.086: 99.9934% ( 12) 00:07:36.301 29844.086 - 30045.735: 100.0000% ( 1) 00:07:36.301 00:07:36.301 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:36.301 ============================================================================== 00:07:36.301 Range in us Cumulative IO count 00:07:36.301 3327.212 - 3352.418: 0.0327% ( 5) 00:07:36.301 3352.418 - 3377.625: 0.0850% ( 8) 00:07:36.301 3377.625 - 3402.831: 0.1046% ( 3) 00:07:36.301 3402.831 - 3428.037: 0.1177% ( 2) 00:07:36.301 3453.243 - 3478.449: 0.1242% ( 1) 00:07:36.301 3478.449 - 3503.655: 0.1373% ( 2) 00:07:36.301 3503.655 - 3528.862: 0.1504% ( 2) 00:07:36.301 3528.862 - 3554.068: 0.1634% ( 2) 00:07:36.301 3554.068 - 3579.274: 0.1765% ( 2) 00:07:36.301 3579.274 - 3604.480: 0.1896% ( 2) 00:07:36.301 3604.480 - 3629.686: 0.2092% ( 3) 00:07:36.301 3629.686 - 3654.892: 0.2223% ( 2) 00:07:36.301 3654.892 - 3680.098: 0.2419% ( 3) 00:07:36.301 3680.098 - 3705.305: 0.2550% ( 2) 00:07:36.301 3705.305 - 3730.511: 0.2615% ( 1) 00:07:36.301 3730.511 - 3755.717: 0.2811% ( 3) 00:07:36.301 3755.717 - 3780.923: 0.2942% ( 2) 00:07:36.301 3780.923 - 3806.129: 0.3138% ( 3) 00:07:36.301 3806.129 - 3831.335: 0.3269% ( 2) 00:07:36.301 3831.335 - 3856.542: 0.3465% ( 3) 00:07:36.301 3856.542 - 3881.748: 0.3596% ( 2) 00:07:36.301 3881.748 - 3906.954: 0.3726% ( 2) 00:07:36.301 3906.954 - 3932.160: 0.3857% ( 2) 00:07:36.301 3932.160 - 3957.366: 0.4053% ( 3) 00:07:36.301 3957.366 - 3982.572: 0.4184% ( 2) 00:07:36.301 5016.025 - 5041.231: 0.4249% ( 1) 00:07:36.301 5041.231 - 5066.437: 0.4838% ( 9) 00:07:36.301 5066.437 - 5091.643: 0.4903% ( 1) 00:07:36.301 5091.643 - 5116.849: 0.5034% ( 2) 00:07:36.301 5116.849 - 5142.055: 0.5165% ( 2) 00:07:36.301 5142.055 - 5167.262: 0.5361% ( 3) 00:07:36.301 5167.262 - 5192.468: 0.5557% ( 3) 00:07:36.301 5192.468 - 5217.674: 0.5688% ( 2) 00:07:36.301 5217.674 - 5242.880: 0.5819% ( 2) 00:07:36.301 5242.880 - 5268.086: 0.6015% ( 3) 00:07:36.301 5268.086 - 5293.292: 0.6211% ( 3) 00:07:36.301 5293.292 - 5318.498: 0.6342% ( 2) 00:07:36.301 5318.498 - 5343.705: 0.6538% ( 3) 00:07:36.301 5343.705 - 5368.911: 0.6734% ( 3) 00:07:36.301 5368.911 - 5394.117: 0.6930% ( 3) 00:07:36.301 5394.117 - 5419.323: 0.7061% ( 2) 00:07:36.301 5419.323 - 5444.529: 0.7257% ( 3) 00:07:36.301 5444.529 - 5469.735: 0.7453% ( 3) 00:07:36.301 5469.735 - 5494.942: 0.7649% ( 3) 00:07:36.301 5494.942 - 5520.148: 0.7780% ( 2) 00:07:36.301 5520.148 - 5545.354: 0.7976% ( 3) 00:07:36.301 5545.354 - 5570.560: 0.8172% ( 3) 00:07:36.301 5570.560 - 5595.766: 0.8368% ( 3) 00:07:36.301 5999.065 - 6024.271: 0.8434% ( 1) 00:07:36.301 6024.271 - 6049.477: 0.8564% ( 2) 00:07:36.301 6049.477 - 6074.683: 0.8630% ( 1) 00:07:36.301 6074.683 - 6099.889: 0.8957% ( 5) 00:07:36.301 6099.889 - 6125.095: 0.9676% ( 11) 00:07:36.301 6125.095 - 6150.302: 1.1376% ( 26) 00:07:36.301 6150.302 - 6175.508: 1.3402% ( 31) 00:07:36.301 6175.508 - 6200.714: 1.7129% ( 57) 00:07:36.301 6200.714 - 6225.920: 2.2228% ( 78) 00:07:36.301 6225.920 - 6251.126: 2.9093% ( 105) 00:07:36.301 6251.126 - 6276.332: 3.6676% ( 116) 00:07:36.301 6276.332 - 6301.538: 4.9752% ( 200) 00:07:36.301 6301.538 - 6326.745: 6.4396% ( 224) 00:07:36.301 6326.745 - 6351.951: 7.9106% ( 
225) 00:07:36.301 6351.951 - 6377.157: 9.6888% ( 272) 00:07:36.301 6377.157 - 6402.363: 11.6370% ( 298) 00:07:36.301 6402.363 - 6427.569: 13.4872% ( 283) 00:07:36.301 6427.569 - 6452.775: 15.2197% ( 265) 00:07:36.301 6452.775 - 6503.188: 18.5408% ( 508) 00:07:36.301 6503.188 - 6553.600: 22.0842% ( 542) 00:07:36.302 6553.600 - 6604.012: 25.7453% ( 560) 00:07:36.302 6604.012 - 6654.425: 29.5567% ( 583) 00:07:36.302 6654.425 - 6704.837: 33.3682% ( 583) 00:07:36.302 6704.837 - 6755.249: 37.3823% ( 614) 00:07:36.302 6755.249 - 6805.662: 41.1611% ( 578) 00:07:36.302 6805.662 - 6856.074: 44.9791% ( 584) 00:07:36.302 6856.074 - 6906.486: 48.6663% ( 564) 00:07:36.302 6906.486 - 6956.898: 52.1640% ( 535) 00:07:36.302 6956.898 - 7007.311: 55.2563% ( 473) 00:07:36.302 7007.311 - 7057.723: 57.9759% ( 416) 00:07:36.302 7057.723 - 7108.135: 60.0941% ( 324) 00:07:36.302 7108.135 - 7158.548: 61.6501% ( 238) 00:07:36.302 7158.548 - 7208.960: 62.6046% ( 146) 00:07:36.302 7208.960 - 7259.372: 63.2388% ( 97) 00:07:36.302 7259.372 - 7309.785: 63.7552% ( 79) 00:07:36.302 7309.785 - 7360.197: 64.2129% ( 70) 00:07:36.302 7360.197 - 7410.609: 64.6051% ( 60) 00:07:36.302 7410.609 - 7461.022: 64.9059% ( 46) 00:07:36.302 7461.022 - 7511.434: 65.2720% ( 56) 00:07:36.302 7511.434 - 7561.846: 65.5662% ( 45) 00:07:36.302 7561.846 - 7612.258: 65.7754% ( 32) 00:07:36.302 7612.258 - 7662.671: 65.9519% ( 27) 00:07:36.302 7662.671 - 7713.083: 66.1219% ( 26) 00:07:36.302 7713.083 - 7763.495: 66.2853% ( 25) 00:07:36.302 7763.495 - 7813.908: 66.4357% ( 23) 00:07:36.302 7813.908 - 7864.320: 66.6122% ( 27) 00:07:36.302 7864.320 - 7914.732: 66.8279% ( 33) 00:07:36.302 7914.732 - 7965.145: 67.0175% ( 29) 00:07:36.302 7965.145 - 8015.557: 67.2137% ( 30) 00:07:36.302 8015.557 - 8065.969: 67.3509% ( 21) 00:07:36.302 8065.969 - 8116.382: 67.4817% ( 20) 00:07:36.302 8116.382 - 8166.794: 67.5798% ( 15) 00:07:36.302 8166.794 - 8217.206: 67.7105% ( 20) 00:07:36.302 8217.206 - 8267.618: 67.8282% ( 18) 00:07:36.302 8267.618 - 8318.031: 67.9132% ( 13) 00:07:36.302 8318.031 - 8368.443: 68.0309% ( 18) 00:07:36.302 8368.443 - 8418.855: 68.1289% ( 15) 00:07:36.302 8418.855 - 8469.268: 68.2466% ( 18) 00:07:36.302 8469.268 - 8519.680: 68.3643% ( 18) 00:07:36.302 8519.680 - 8570.092: 68.4689% ( 16) 00:07:36.302 8570.092 - 8620.505: 68.5539% ( 13) 00:07:36.302 8620.505 - 8670.917: 68.6650% ( 17) 00:07:36.302 8670.917 - 8721.329: 68.8481% ( 28) 00:07:36.302 8721.329 - 8771.742: 68.9919% ( 22) 00:07:36.302 8771.742 - 8822.154: 69.1161% ( 19) 00:07:36.302 8822.154 - 8872.566: 69.2469% ( 20) 00:07:36.302 8872.566 - 8922.978: 69.4168% ( 26) 00:07:36.302 8922.978 - 8973.391: 69.6064% ( 29) 00:07:36.302 8973.391 - 9023.803: 69.8353% ( 35) 00:07:36.302 9023.803 - 9074.215: 70.0445% ( 32) 00:07:36.302 9074.215 - 9124.628: 70.3648% ( 49) 00:07:36.302 9124.628 - 9175.040: 70.5871% ( 34) 00:07:36.302 9175.040 - 9225.452: 70.7440% ( 24) 00:07:36.302 9225.452 - 9275.865: 70.9074% ( 25) 00:07:36.302 9275.865 - 9326.277: 71.0970% ( 29) 00:07:36.302 9326.277 - 9376.689: 71.2408% ( 22) 00:07:36.302 9376.689 - 9427.102: 71.3651% ( 19) 00:07:36.302 9427.102 - 9477.514: 71.4958% ( 20) 00:07:36.302 9477.514 - 9527.926: 71.6135% ( 18) 00:07:36.302 9527.926 - 9578.338: 71.7246% ( 17) 00:07:36.302 9578.338 - 9628.751: 71.9338% ( 32) 00:07:36.302 9628.751 - 9679.163: 72.0973% ( 25) 00:07:36.302 9679.163 - 9729.575: 72.2411% ( 22) 00:07:36.302 9729.575 - 9779.988: 72.4046% ( 25) 00:07:36.302 9779.988 - 9830.400: 72.5615% ( 24) 00:07:36.302 9830.400 - 9880.812: 72.6922% ( 20) 
00:07:36.302 9880.812 - 9931.225: 72.8687% ( 27) 00:07:36.302 9931.225 - 9981.637: 73.0452% ( 27) 00:07:36.302 9981.637 - 10032.049: 73.2283% ( 28) 00:07:36.302 10032.049 - 10082.462: 73.4179% ( 29) 00:07:36.302 10082.462 - 10132.874: 73.5813% ( 25) 00:07:36.302 10132.874 - 10183.286: 73.7513% ( 26) 00:07:36.302 10183.286 - 10233.698: 73.9474% ( 30) 00:07:36.302 10233.698 - 10284.111: 74.1436% ( 30) 00:07:36.302 10284.111 - 10334.523: 74.3920% ( 38) 00:07:36.302 10334.523 - 10384.935: 74.5947% ( 31) 00:07:36.302 10384.935 - 10435.348: 74.8169% ( 34) 00:07:36.302 10435.348 - 10485.760: 74.9935% ( 27) 00:07:36.302 10485.760 - 10536.172: 75.2157% ( 34) 00:07:36.302 10536.172 - 10586.585: 75.4315% ( 33) 00:07:36.302 10586.585 - 10636.997: 75.6734% ( 37) 00:07:36.302 10636.997 - 10687.409: 75.9283% ( 39) 00:07:36.302 10687.409 - 10737.822: 76.2225% ( 45) 00:07:36.302 10737.822 - 10788.234: 76.5298% ( 47) 00:07:36.302 10788.234 - 10838.646: 76.8371% ( 47) 00:07:36.302 10838.646 - 10889.058: 77.2228% ( 59) 00:07:36.302 10889.058 - 10939.471: 77.6151% ( 60) 00:07:36.302 10939.471 - 10989.883: 78.0335% ( 64) 00:07:36.302 10989.883 - 11040.295: 78.5238% ( 75) 00:07:36.302 11040.295 - 11090.708: 79.0207% ( 76) 00:07:36.302 11090.708 - 11141.120: 79.5306% ( 78) 00:07:36.302 11141.120 - 11191.532: 80.0471% ( 79) 00:07:36.302 11191.532 - 11241.945: 80.6158% ( 87) 00:07:36.302 11241.945 - 11292.357: 81.1323% ( 79) 00:07:36.302 11292.357 - 11342.769: 81.6553% ( 80) 00:07:36.302 11342.769 - 11393.182: 82.1653% ( 78) 00:07:36.302 11393.182 - 11443.594: 82.7275% ( 86) 00:07:36.302 11443.594 - 11494.006: 83.3290% ( 92) 00:07:36.302 11494.006 - 11544.418: 83.9762% ( 99) 00:07:36.302 11544.418 - 11594.831: 84.5777% ( 92) 00:07:36.302 11594.831 - 11645.243: 85.2380% ( 101) 00:07:36.302 11645.243 - 11695.655: 85.8264% ( 90) 00:07:36.302 11695.655 - 11746.068: 86.3690% ( 83) 00:07:36.302 11746.068 - 11796.480: 86.8397% ( 72) 00:07:36.302 11796.480 - 11846.892: 87.2843% ( 68) 00:07:36.302 11846.892 - 11897.305: 87.6961% ( 63) 00:07:36.302 11897.305 - 11947.717: 88.1145% ( 64) 00:07:36.302 11947.717 - 11998.129: 88.4610% ( 53) 00:07:36.302 11998.129 - 12048.542: 88.8075% ( 53) 00:07:36.302 12048.542 - 12098.954: 89.1802% ( 57) 00:07:36.302 12098.954 - 12149.366: 89.5397% ( 55) 00:07:36.302 12149.366 - 12199.778: 89.8797% ( 52) 00:07:36.302 12199.778 - 12250.191: 90.1935% ( 48) 00:07:36.302 12250.191 - 12300.603: 90.4681% ( 42) 00:07:36.302 12300.603 - 12351.015: 90.7492% ( 43) 00:07:36.302 12351.015 - 12401.428: 91.0369% ( 44) 00:07:36.302 12401.428 - 12451.840: 91.2853% ( 38) 00:07:36.302 12451.840 - 12502.252: 91.5730% ( 44) 00:07:36.302 12502.252 - 12552.665: 91.8410% ( 41) 00:07:36.302 12552.665 - 12603.077: 92.1287% ( 44) 00:07:36.302 12603.077 - 12653.489: 92.4555% ( 50) 00:07:36.302 12653.489 - 12703.902: 92.7497% ( 45) 00:07:36.302 12703.902 - 12754.314: 93.0309% ( 43) 00:07:36.302 12754.314 - 12804.726: 93.3120% ( 43) 00:07:36.302 12804.726 - 12855.138: 93.5800% ( 41) 00:07:36.302 12855.138 - 12905.551: 93.8285% ( 38) 00:07:36.302 12905.551 - 13006.375: 94.2926% ( 71) 00:07:36.302 13006.375 - 13107.200: 94.7110% ( 64) 00:07:36.302 13107.200 - 13208.025: 95.1294% ( 64) 00:07:36.302 13208.025 - 13308.849: 95.4367% ( 47) 00:07:36.302 13308.849 - 13409.674: 95.7505% ( 48) 00:07:36.302 13409.674 - 13510.498: 96.2147% ( 71) 00:07:36.302 13510.498 - 13611.323: 96.5481% ( 51) 00:07:36.302 13611.323 - 13712.148: 96.8750% ( 50) 00:07:36.302 13712.148 - 13812.972: 97.1757% ( 46) 00:07:36.302 13812.972 - 13913.797: 
97.5092% ( 51) 00:07:36.302 13913.797 - 14014.622: 97.7968% ( 44) 00:07:36.302 14014.622 - 14115.446: 98.0518% ( 39) 00:07:36.302 14115.446 - 14216.271: 98.3067% ( 39) 00:07:36.302 14216.271 - 14317.095: 98.4898% ( 28) 00:07:36.302 14317.095 - 14417.920: 98.5879% ( 15) 00:07:36.302 14417.920 - 14518.745: 98.6663% ( 12) 00:07:36.302 14518.745 - 14619.569: 98.7055% ( 6) 00:07:36.302 14619.569 - 14720.394: 98.7317% ( 4) 00:07:36.302 14720.394 - 14821.218: 98.7448% ( 2) 00:07:36.302 15224.517 - 15325.342: 98.7709% ( 4) 00:07:36.302 15325.342 - 15426.166: 98.7971% ( 4) 00:07:36.302 15426.166 - 15526.991: 98.8298% ( 5) 00:07:36.302 15526.991 - 15627.815: 98.8624% ( 5) 00:07:36.302 15627.815 - 15728.640: 98.8886% ( 4) 00:07:36.302 15728.640 - 15829.465: 98.9213% ( 5) 00:07:36.302 15829.465 - 15930.289: 98.9474% ( 4) 00:07:36.302 15930.289 - 16031.114: 98.9801% ( 5) 00:07:36.302 16031.114 - 16131.938: 99.0128% ( 5) 00:07:36.302 16131.938 - 16232.763: 99.0390% ( 4) 00:07:36.302 16232.763 - 16333.588: 99.0651% ( 4) 00:07:36.302 16333.588 - 16434.412: 99.0978% ( 5) 00:07:36.302 16434.412 - 16535.237: 99.1305% ( 5) 00:07:36.302 16535.237 - 16636.062: 99.1566% ( 4) 00:07:36.302 16636.062 - 16736.886: 99.1632% ( 1) 00:07:36.302 17341.834 - 17442.658: 99.1828% ( 3) 00:07:36.302 17442.658 - 17543.483: 99.2286% ( 7) 00:07:36.302 17543.483 - 17644.308: 99.2743% ( 7) 00:07:36.302 17644.308 - 17745.132: 99.3135% ( 6) 00:07:36.302 17745.132 - 17845.957: 99.3593% ( 7) 00:07:36.302 17845.957 - 17946.782: 99.3985% ( 6) 00:07:36.302 17946.782 - 18047.606: 99.4443% ( 7) 00:07:36.302 18047.606 - 18148.431: 99.4901% ( 7) 00:07:36.302 18148.431 - 18249.255: 99.5293% ( 6) 00:07:36.302 18249.255 - 18350.080: 99.5620% ( 5) 00:07:36.302 18350.080 - 18450.905: 99.5816% ( 3) 00:07:36.302 23794.609 - 23895.434: 99.5881% ( 1) 00:07:36.302 23895.434 - 23996.258: 99.6077% ( 3) 00:07:36.302 23996.258 - 24097.083: 99.6404% ( 5) 00:07:36.302 24097.083 - 24197.908: 99.6731% ( 5) 00:07:36.302 24197.908 - 24298.732: 99.6993% ( 4) 00:07:36.302 24298.732 - 24399.557: 99.7450% ( 7) 00:07:36.302 24399.557 - 24500.382: 99.7908% ( 7) 00:07:36.302 24500.382 - 24601.206: 99.8300% ( 6) 00:07:36.302 24601.206 - 24702.031: 99.8758% ( 7) 00:07:36.302 24702.031 - 24802.855: 99.9215% ( 7) 00:07:36.302 24802.855 - 24903.680: 99.9608% ( 6) 00:07:36.302 24903.680 - 25004.505: 100.0000% ( 6) 00:07:36.302 00:07:36.302 10:28:10 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:07:37.239 Initializing NVMe Controllers 00:07:37.239 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:37.239 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:37.239 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:37.239 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:37.239 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:37.239 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:37.239 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:37.239 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:37.239 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:37.239 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:37.239 Initialization complete. Launching workers. 
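00:07:37.239 The spdk_nvme_perf invocation above (queue depth 128, 12288-byte sequential writes, 1-second run, -LL for detailed latency tracking, shared-memory id 0 -- flag meanings as commonly documented for SPDK's perf tool) prints one summary latency table covering every attached namespace, followed by a per-namespace latency histogram. Below is a minimal sketch, not part of the test itself, for pulling the per-device and Total summary rows back out of a console log like this one; it assumes the original one-entry-per-line log layout, and the console.log file name is only a placeholder.

#!/usr/bin/env python3
"""Sketch: extract spdk_nvme_perf summary rows from a Jenkins console log.

Assumes each log entry sits on its own line, prefixed by a timestamp such as
00:07:37.239, as in the original console output. The file name is hypothetical.
"""
import re

# Matches rows of the summary table, e.g.
# 00:07:37.239 PCIE (0000:00:10.0) NSID 1 from core 0: 17437.06 204.34 7344.07 4705.32 20233.28
ROW = re.compile(
    r"^\S+\s+(?P<dev>.+?)\s*:\s+"
    r"(?P<iops>[\d.]+)\s+(?P<mib>[\d.]+)\s+"
    r"(?P<avg>[\d.]+)\s+(?P<min>[\d.]+)\s+(?P<max>[\d.]+)\s*$"
)

def summary_rows(path="console.log"):  # hypothetical file name
    """Return the per-device rows and the Total row of the summary table."""
    rows = []
    with open(path) as fh:
        for line in fh:
            m = ROW.match(line)
            if m and ("PCIE" in m.group("dev") or m.group("dev").startswith("Total")):
                rows.append({
                    "device": m.group("dev"),
                    "iops": float(m.group("iops")),
                    "avg_us": float(m.group("avg")),
                    "max_us": float(m.group("max")),
                })
    return rows

if __name__ == "__main__":
    for r in summary_rows():
        print(f'{r["device"]:<45} {r["iops"]:>10.2f} IOPS  avg {r["avg_us"]:.2f} us')

Run against a saved console log, this prints one line per namespace plus the Total row, which is enough to compare IOPS and average latency across nightly runs without re-reading the full histograms.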
00:07:37.239 ======================================================== 00:07:37.239 Latency(us) 00:07:37.239 Device Information : IOPS MiB/s Average min max 00:07:37.239 PCIE (0000:00:10.0) NSID 1 from core 0: 17437.06 204.34 7344.07 4705.32 20233.28 00:07:37.239 PCIE (0000:00:11.0) NSID 1 from core 0: 17437.06 204.34 7338.53 4588.53 19752.39 00:07:37.239 PCIE (0000:00:13.0) NSID 1 from core 0: 17437.06 204.34 7334.03 4333.87 19513.21 00:07:37.239 PCIE (0000:00:12.0) NSID 1 from core 0: 17437.06 204.34 7329.43 4011.65 19361.80 00:07:37.239 PCIE (0000:00:12.0) NSID 2 from core 0: 17437.06 204.34 7324.83 3591.36 19375.15 00:07:37.239 PCIE (0000:00:12.0) NSID 3 from core 0: 17437.06 204.34 7320.18 3354.41 19217.16 00:07:37.239 ======================================================== 00:07:37.239 Total : 104622.34 1226.04 7331.84 3354.41 20233.28 00:07:37.239 00:07:37.239 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:37.239 ================================================================================= 00:07:37.239 1.00000% : 5973.858us 00:07:37.239 10.00000% : 6402.363us 00:07:37.239 25.00000% : 6604.012us 00:07:37.239 50.00000% : 6906.486us 00:07:37.239 75.00000% : 7511.434us 00:07:37.239 90.00000% : 8872.566us 00:07:37.239 95.00000% : 10132.874us 00:07:37.239 98.00000% : 11695.655us 00:07:37.239 99.00000% : 12703.902us 00:07:37.239 99.50000% : 14720.394us 00:07:37.239 99.90000% : 20164.923us 00:07:37.239 99.99000% : 20265.748us 00:07:37.239 99.99900% : 20265.748us 00:07:37.239 99.99990% : 20265.748us 00:07:37.239 99.99999% : 20265.748us 00:07:37.239 00:07:37.239 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:37.239 ================================================================================= 00:07:37.239 1.00000% : 6074.683us 00:07:37.239 10.00000% : 6503.188us 00:07:37.239 25.00000% : 6654.425us 00:07:37.239 50.00000% : 6906.486us 00:07:37.239 75.00000% : 7461.022us 00:07:37.239 90.00000% : 8973.391us 00:07:37.239 95.00000% : 10082.462us 00:07:37.239 98.00000% : 11594.831us 00:07:37.239 99.00000% : 13409.674us 00:07:37.239 99.50000% : 14922.043us 00:07:37.239 99.90000% : 19559.975us 00:07:37.239 99.99000% : 19761.625us 00:07:37.239 99.99900% : 19761.625us 00:07:37.239 99.99990% : 19761.625us 00:07:37.239 99.99999% : 19761.625us 00:07:37.239 00:07:37.239 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:37.239 ================================================================================= 00:07:37.239 1.00000% : 5973.858us 00:07:37.239 10.00000% : 6427.569us 00:07:37.239 25.00000% : 6654.425us 00:07:37.239 50.00000% : 6906.486us 00:07:37.239 75.00000% : 7511.434us 00:07:37.239 90.00000% : 8822.154us 00:07:37.239 95.00000% : 10183.286us 00:07:37.239 98.00000% : 11342.769us 00:07:37.239 99.00000% : 13107.200us 00:07:37.239 99.50000% : 15325.342us 00:07:37.239 99.90000% : 19257.502us 00:07:37.239 99.99000% : 19559.975us 00:07:37.239 99.99900% : 19559.975us 00:07:37.239 99.99990% : 19559.975us 00:07:37.239 99.99999% : 19559.975us 00:07:37.239 00:07:37.239 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:37.239 ================================================================================= 00:07:37.239 1.00000% : 5973.858us 00:07:37.239 10.00000% : 6452.775us 00:07:37.239 25.00000% : 6654.425us 00:07:37.239 50.00000% : 6906.486us 00:07:37.239 75.00000% : 7511.434us 00:07:37.239 90.00000% : 8822.154us 00:07:37.239 95.00000% : 10183.286us 00:07:37.239 98.00000% : 11191.532us 00:07:37.239 99.00000% 
: 13308.849us 00:07:37.239 99.50000% : 15022.868us 00:07:37.239 99.90000% : 19055.852us 00:07:37.239 99.99000% : 19358.326us 00:07:37.239 99.99900% : 19459.151us 00:07:37.239 99.99990% : 19459.151us 00:07:37.239 99.99999% : 19459.151us 00:07:37.239 00:07:37.239 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:37.239 ================================================================================= 00:07:37.239 1.00000% : 5999.065us 00:07:37.239 10.00000% : 6452.775us 00:07:37.239 25.00000% : 6654.425us 00:07:37.239 50.00000% : 6906.486us 00:07:37.239 75.00000% : 7461.022us 00:07:37.239 90.00000% : 8822.154us 00:07:37.239 95.00000% : 10132.874us 00:07:37.239 98.00000% : 11191.532us 00:07:37.239 99.00000% : 13308.849us 00:07:37.239 99.50000% : 14922.043us 00:07:37.239 99.90000% : 18854.203us 00:07:37.239 99.99000% : 19358.326us 00:07:37.239 99.99900% : 19459.151us 00:07:37.239 99.99990% : 19459.151us 00:07:37.239 99.99999% : 19459.151us 00:07:37.240 00:07:37.240 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:37.240 ================================================================================= 00:07:37.240 1.00000% : 5973.858us 00:07:37.240 10.00000% : 6503.188us 00:07:37.240 25.00000% : 6654.425us 00:07:37.240 50.00000% : 6906.486us 00:07:37.240 75.00000% : 7461.022us 00:07:37.240 90.00000% : 8771.742us 00:07:37.240 95.00000% : 10032.049us 00:07:37.240 98.00000% : 11544.418us 00:07:37.240 99.00000% : 12552.665us 00:07:37.240 99.50000% : 14518.745us 00:07:37.240 99.90000% : 18753.378us 00:07:37.240 99.99000% : 19156.677us 00:07:37.240 99.99900% : 19257.502us 00:07:37.240 99.99990% : 19257.502us 00:07:37.240 99.99999% : 19257.502us 00:07:37.240 00:07:37.240 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:37.240 ============================================================================== 00:07:37.240 Range in us Cumulative IO count 00:07:37.240 4688.345 - 4713.551: 0.0057% ( 1) 00:07:37.240 4789.169 - 4814.375: 0.0343% ( 5) 00:07:37.240 4814.375 - 4839.582: 0.0458% ( 2) 00:07:37.240 4839.582 - 4864.788: 0.0572% ( 2) 00:07:37.240 4864.788 - 4889.994: 0.0630% ( 1) 00:07:37.240 4889.994 - 4915.200: 0.0744% ( 2) 00:07:37.240 4915.200 - 4940.406: 0.0801% ( 1) 00:07:37.240 4940.406 - 4965.612: 0.1145% ( 6) 00:07:37.240 4965.612 - 4990.818: 0.1774% ( 11) 00:07:37.240 4990.818 - 5016.025: 0.2118% ( 6) 00:07:37.240 5016.025 - 5041.231: 0.2518% ( 7) 00:07:37.240 5041.231 - 5066.437: 0.2690% ( 3) 00:07:37.240 5066.437 - 5091.643: 0.2747% ( 1) 00:07:37.240 5091.643 - 5116.849: 0.2804% ( 1) 00:07:37.240 5116.849 - 5142.055: 0.2862% ( 1) 00:07:37.240 5142.055 - 5167.262: 0.2919% ( 1) 00:07:37.240 5167.262 - 5192.468: 0.3033% ( 2) 00:07:37.240 5192.468 - 5217.674: 0.3148% ( 2) 00:07:37.240 5217.674 - 5242.880: 0.3262% ( 2) 00:07:37.240 5268.086 - 5293.292: 0.3377% ( 2) 00:07:37.240 5293.292 - 5318.498: 0.3491% ( 2) 00:07:37.240 5368.911 - 5394.117: 0.3549% ( 1) 00:07:37.240 5394.117 - 5419.323: 0.3663% ( 2) 00:07:37.240 5570.560 - 5595.766: 0.3720% ( 1) 00:07:37.240 5595.766 - 5620.972: 0.3835% ( 2) 00:07:37.240 5620.972 - 5646.178: 0.3892% ( 1) 00:07:37.240 5646.178 - 5671.385: 0.4006% ( 2) 00:07:37.240 5671.385 - 5696.591: 0.4064% ( 1) 00:07:37.240 5696.591 - 5721.797: 0.4178% ( 2) 00:07:37.240 5721.797 - 5747.003: 0.4235% ( 1) 00:07:37.240 5747.003 - 5772.209: 0.4293% ( 1) 00:07:37.240 5772.209 - 5797.415: 0.4350% ( 1) 00:07:37.240 5797.415 - 5822.622: 0.4522% ( 3) 00:07:37.240 5822.622 - 5847.828: 0.4693% ( 3) 00:07:37.240 5847.828 - 
5873.034: 0.5208% ( 9) 00:07:37.240 5873.034 - 5898.240: 0.6410% ( 21) 00:07:37.240 5898.240 - 5923.446: 0.8070% ( 29) 00:07:37.240 5923.446 - 5948.652: 0.9730% ( 29) 00:07:37.240 5948.652 - 5973.858: 1.1447% ( 30) 00:07:37.240 5973.858 - 5999.065: 1.3908% ( 43) 00:07:37.240 5999.065 - 6024.271: 1.5911% ( 35) 00:07:37.240 6024.271 - 6049.477: 1.8086% ( 38) 00:07:37.240 6049.477 - 6074.683: 2.0891% ( 49) 00:07:37.240 6074.683 - 6099.889: 2.3352% ( 43) 00:07:37.240 6099.889 - 6125.095: 2.6671% ( 58) 00:07:37.240 6125.095 - 6150.302: 2.9304% ( 46) 00:07:37.240 6150.302 - 6175.508: 3.2624% ( 58) 00:07:37.240 6175.508 - 6200.714: 3.8004% ( 94) 00:07:37.240 6200.714 - 6225.920: 4.3899% ( 103) 00:07:37.240 6225.920 - 6251.126: 4.9908% ( 105) 00:07:37.240 6251.126 - 6276.332: 5.7406% ( 131) 00:07:37.240 6276.332 - 6301.538: 6.4904% ( 131) 00:07:37.240 6301.538 - 6326.745: 7.3775% ( 155) 00:07:37.240 6326.745 - 6351.951: 8.1502% ( 135) 00:07:37.240 6351.951 - 6377.157: 9.3750% ( 214) 00:07:37.240 6377.157 - 6402.363: 10.7887% ( 247) 00:07:37.240 6402.363 - 6427.569: 12.3569% ( 274) 00:07:37.240 6427.569 - 6452.775: 14.2685% ( 334) 00:07:37.240 6452.775 - 6503.188: 18.0288% ( 657) 00:07:37.240 6503.188 - 6553.600: 21.6690% ( 636) 00:07:37.240 6553.600 - 6604.012: 25.1889% ( 615) 00:07:37.240 6604.012 - 6654.425: 28.7088% ( 615) 00:07:37.240 6654.425 - 6704.837: 32.8297% ( 720) 00:07:37.240 6704.837 - 6755.249: 37.0307% ( 734) 00:07:37.240 6755.249 - 6805.662: 41.5121% ( 783) 00:07:37.240 6805.662 - 6856.074: 45.8677% ( 761) 00:07:37.240 6856.074 - 6906.486: 50.0229% ( 726) 00:07:37.240 6906.486 - 6956.898: 53.7717% ( 655) 00:07:37.240 6956.898 - 7007.311: 56.9540% ( 556) 00:07:37.240 7007.311 - 7057.723: 59.9874% ( 530) 00:07:37.240 7057.723 - 7108.135: 62.4199% ( 425) 00:07:37.240 7108.135 - 7158.548: 64.8867% ( 431) 00:07:37.240 7158.548 - 7208.960: 66.7983% ( 334) 00:07:37.240 7208.960 - 7259.372: 68.5726% ( 310) 00:07:37.240 7259.372 - 7309.785: 70.1522% ( 276) 00:07:37.240 7309.785 - 7360.197: 71.6518% ( 262) 00:07:37.240 7360.197 - 7410.609: 73.2143% ( 273) 00:07:37.240 7410.609 - 7461.022: 74.6051% ( 243) 00:07:37.240 7461.022 - 7511.434: 75.7154% ( 194) 00:07:37.240 7511.434 - 7561.846: 76.8601% ( 200) 00:07:37.240 7561.846 - 7612.258: 77.8903% ( 180) 00:07:37.240 7612.258 - 7662.671: 78.6973% ( 141) 00:07:37.240 7662.671 - 7713.083: 79.4700% ( 135) 00:07:37.240 7713.083 - 7763.495: 80.2083% ( 129) 00:07:37.240 7763.495 - 7813.908: 81.1584% ( 166) 00:07:37.240 7813.908 - 7864.320: 82.0112% ( 149) 00:07:37.240 7864.320 - 7914.732: 82.6122% ( 105) 00:07:37.240 7914.732 - 7965.145: 83.1674% ( 97) 00:07:37.240 7965.145 - 8015.557: 83.6710% ( 88) 00:07:37.240 8015.557 - 8065.969: 84.4780% ( 141) 00:07:37.240 8065.969 - 8116.382: 85.0618% ( 102) 00:07:37.240 8116.382 - 8166.794: 85.5769% ( 90) 00:07:37.240 8166.794 - 8217.206: 86.0520% ( 83) 00:07:37.240 8217.206 - 8267.618: 86.4927% ( 77) 00:07:37.240 8267.618 - 8318.031: 86.9105% ( 73) 00:07:37.240 8318.031 - 8368.443: 87.3397% ( 75) 00:07:37.240 8368.443 - 8418.855: 87.7289% ( 68) 00:07:37.240 8418.855 - 8469.268: 88.2269% ( 87) 00:07:37.240 8469.268 - 8519.680: 88.5359% ( 54) 00:07:37.240 8519.680 - 8570.092: 88.7992% ( 46) 00:07:37.240 8570.092 - 8620.505: 89.0511% ( 44) 00:07:37.240 8620.505 - 8670.917: 89.3143% ( 46) 00:07:37.240 8670.917 - 8721.329: 89.4975% ( 32) 00:07:37.240 8721.329 - 8771.742: 89.7092% ( 37) 00:07:37.240 8771.742 - 8822.154: 89.9267% ( 38) 00:07:37.240 8822.154 - 8872.566: 90.1385% ( 37) 00:07:37.240 8872.566 - 
8922.978: 90.3732% ( 41) 00:07:37.240 8922.978 - 8973.391: 90.5620% ( 33) 00:07:37.240 8973.391 - 9023.803: 90.8196% ( 45) 00:07:37.240 9023.803 - 9074.215: 90.8997% ( 14) 00:07:37.240 9074.215 - 9124.628: 91.0256% ( 22) 00:07:37.240 9124.628 - 9175.040: 91.2260% ( 35) 00:07:37.240 9175.040 - 9225.452: 91.4206% ( 34) 00:07:37.240 9225.452 - 9275.865: 91.5293% ( 19) 00:07:37.240 9275.865 - 9326.277: 91.6266% ( 17) 00:07:37.240 9326.277 - 9376.689: 91.8326% ( 36) 00:07:37.240 9376.689 - 9427.102: 91.9700% ( 24) 00:07:37.240 9427.102 - 9477.514: 92.0559% ( 15) 00:07:37.240 9477.514 - 9527.926: 92.2619% ( 36) 00:07:37.240 9527.926 - 9578.338: 92.4336% ( 30) 00:07:37.240 9578.338 - 9628.751: 92.8056% ( 65) 00:07:37.240 9628.751 - 9679.163: 93.1033% ( 52) 00:07:37.240 9679.163 - 9729.575: 93.3894% ( 50) 00:07:37.240 9729.575 - 9779.988: 93.6355% ( 43) 00:07:37.240 9779.988 - 9830.400: 93.7843% ( 26) 00:07:37.240 9830.400 - 9880.812: 93.9675% ( 32) 00:07:37.240 9880.812 - 9931.225: 94.2995% ( 58) 00:07:37.240 9931.225 - 9981.637: 94.5341% ( 41) 00:07:37.240 9981.637 - 10032.049: 94.7115% ( 31) 00:07:37.240 10032.049 - 10082.462: 94.8947% ( 32) 00:07:37.240 10082.462 - 10132.874: 95.0721% ( 31) 00:07:37.240 10132.874 - 10183.286: 95.2266% ( 27) 00:07:37.240 10183.286 - 10233.698: 95.3812% ( 27) 00:07:37.240 10233.698 - 10284.111: 95.5643% ( 32) 00:07:37.240 10284.111 - 10334.523: 95.7131% ( 26) 00:07:37.240 10334.523 - 10384.935: 95.8677% ( 27) 00:07:37.240 10384.935 - 10435.348: 95.9879% ( 21) 00:07:37.240 10435.348 - 10485.760: 96.0852% ( 17) 00:07:37.240 10485.760 - 10536.172: 96.2225% ( 24) 00:07:37.240 10536.172 - 10586.585: 96.3084% ( 15) 00:07:37.240 10586.585 - 10636.997: 96.4286% ( 21) 00:07:37.240 10636.997 - 10687.409: 96.4858% ( 10) 00:07:37.240 10687.409 - 10737.822: 96.5316% ( 8) 00:07:37.240 10737.822 - 10788.234: 96.6003% ( 12) 00:07:37.240 10788.234 - 10838.646: 96.6575% ( 10) 00:07:37.240 10838.646 - 10889.058: 96.6861% ( 5) 00:07:37.240 10889.058 - 10939.471: 96.7262% ( 7) 00:07:37.240 10939.471 - 10989.883: 96.7605% ( 6) 00:07:37.240 10989.883 - 11040.295: 96.7949% ( 6) 00:07:37.240 11040.295 - 11090.708: 96.8521% ( 10) 00:07:37.240 11090.708 - 11141.120: 96.9093% ( 10) 00:07:37.240 11141.120 - 11191.532: 96.9666% ( 10) 00:07:37.240 11191.532 - 11241.945: 97.0181% ( 9) 00:07:37.240 11241.945 - 11292.357: 97.0925% ( 13) 00:07:37.240 11292.357 - 11342.769: 97.2585% ( 29) 00:07:37.240 11342.769 - 11393.182: 97.4016% ( 25) 00:07:37.240 11393.182 - 11443.594: 97.4760% ( 13) 00:07:37.240 11443.594 - 11494.006: 97.5332% ( 10) 00:07:37.240 11494.006 - 11544.418: 97.5962% ( 11) 00:07:37.240 11544.418 - 11594.831: 97.7450% ( 26) 00:07:37.240 11594.831 - 11645.243: 97.9396% ( 34) 00:07:37.240 11645.243 - 11695.655: 98.0025% ( 11) 00:07:37.240 11695.655 - 11746.068: 98.0884% ( 15) 00:07:37.240 11746.068 - 11796.480: 98.1571% ( 12) 00:07:37.240 11796.480 - 11846.892: 98.2315% ( 13) 00:07:37.240 11846.892 - 11897.305: 98.3001% ( 12) 00:07:37.240 11897.305 - 11947.717: 98.3631% ( 11) 00:07:37.240 11947.717 - 11998.129: 98.4089% ( 8) 00:07:37.240 11998.129 - 12048.542: 98.4604% ( 9) 00:07:37.240 12048.542 - 12098.954: 98.5005% ( 7) 00:07:37.240 12098.954 - 12149.366: 98.5577% ( 10) 00:07:37.240 12149.366 - 12199.778: 98.6321% ( 13) 00:07:37.240 12199.778 - 12250.191: 98.6951% ( 11) 00:07:37.240 12250.191 - 12300.603: 98.7351% ( 7) 00:07:37.241 12300.603 - 12351.015: 98.7809% ( 8) 00:07:37.241 12351.015 - 12401.428: 98.8152% ( 6) 00:07:37.241 12401.428 - 12451.840: 98.8439% ( 5) 00:07:37.241 
12451.840 - 12502.252: 98.8954% ( 9) 00:07:37.241 12502.252 - 12552.665: 98.9354% ( 7) 00:07:37.241 12552.665 - 12603.077: 98.9641% ( 5) 00:07:37.241 12603.077 - 12653.489: 98.9927% ( 5) 00:07:37.241 12653.489 - 12703.902: 99.0385% ( 8) 00:07:37.241 12703.902 - 12754.314: 99.0785% ( 7) 00:07:37.241 12754.314 - 12804.726: 99.1186% ( 7) 00:07:37.241 12804.726 - 12855.138: 99.1300% ( 2) 00:07:37.241 12855.138 - 12905.551: 99.1472% ( 3) 00:07:37.241 12905.551 - 13006.375: 99.1701% ( 4) 00:07:37.241 13208.025 - 13308.849: 99.1873% ( 3) 00:07:37.241 13308.849 - 13409.674: 99.2044% ( 3) 00:07:37.241 13409.674 - 13510.498: 99.2159% ( 2) 00:07:37.241 13510.498 - 13611.323: 99.2331% ( 3) 00:07:37.241 13611.323 - 13712.148: 99.2388% ( 1) 00:07:37.241 13712.148 - 13812.972: 99.2502% ( 2) 00:07:37.241 13812.972 - 13913.797: 99.2788% ( 5) 00:07:37.241 13913.797 - 14014.622: 99.2846% ( 1) 00:07:37.241 14014.622 - 14115.446: 99.3017% ( 3) 00:07:37.241 14115.446 - 14216.271: 99.3132% ( 2) 00:07:37.241 14216.271 - 14317.095: 99.3647% ( 9) 00:07:37.241 14317.095 - 14417.920: 99.3876% ( 4) 00:07:37.241 14417.920 - 14518.745: 99.4391% ( 9) 00:07:37.241 14518.745 - 14619.569: 99.4734% ( 6) 00:07:37.241 14619.569 - 14720.394: 99.5135% ( 7) 00:07:37.241 14720.394 - 14821.218: 99.5364% ( 4) 00:07:37.241 14821.218 - 14922.043: 99.5536% ( 3) 00:07:37.241 14922.043 - 15022.868: 99.5822% ( 5) 00:07:37.241 15022.868 - 15123.692: 99.5994% ( 3) 00:07:37.241 15123.692 - 15224.517: 99.6108% ( 2) 00:07:37.241 15224.517 - 15325.342: 99.6223% ( 2) 00:07:37.241 15325.342 - 15426.166: 99.6337% ( 2) 00:07:37.241 18955.028 - 19055.852: 99.7253% ( 16) 00:07:37.241 19055.852 - 19156.677: 99.7424% ( 3) 00:07:37.241 19156.677 - 19257.502: 99.7596% ( 3) 00:07:37.241 19257.502 - 19358.326: 99.7653% ( 1) 00:07:37.241 19459.151 - 19559.975: 99.7711% ( 1) 00:07:37.241 19559.975 - 19660.800: 99.7768% ( 1) 00:07:37.241 19660.800 - 19761.625: 99.7940% ( 3) 00:07:37.241 19761.625 - 19862.449: 99.8168% ( 4) 00:07:37.241 19862.449 - 19963.274: 99.8283% ( 2) 00:07:37.241 19963.274 - 20064.098: 99.8684% ( 7) 00:07:37.241 20064.098 - 20164.923: 99.9428% ( 13) 00:07:37.241 20164.923 - 20265.748: 100.0000% ( 10) 00:07:37.241 00:07:37.241 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:37.241 ============================================================================== 00:07:37.241 Range in us Cumulative IO count 00:07:37.241 4587.520 - 4612.726: 0.0172% ( 3) 00:07:37.241 4612.726 - 4637.932: 0.0229% ( 1) 00:07:37.241 4637.932 - 4663.138: 0.0343% ( 2) 00:07:37.241 4663.138 - 4688.345: 0.0801% ( 8) 00:07:37.241 4688.345 - 4713.551: 0.1145% ( 6) 00:07:37.241 4713.551 - 4738.757: 0.1660% ( 9) 00:07:37.241 4738.757 - 4763.963: 0.2232% ( 10) 00:07:37.241 4763.963 - 4789.169: 0.3033% ( 14) 00:07:37.241 4789.169 - 4814.375: 0.3262% ( 4) 00:07:37.241 4814.375 - 4839.582: 0.3377% ( 2) 00:07:37.241 4839.582 - 4864.788: 0.3491% ( 2) 00:07:37.241 4864.788 - 4889.994: 0.3606% ( 2) 00:07:37.241 4889.994 - 4915.200: 0.3663% ( 1) 00:07:37.241 5847.828 - 5873.034: 0.3835% ( 3) 00:07:37.241 5873.034 - 5898.240: 0.3892% ( 1) 00:07:37.241 5898.240 - 5923.446: 0.4350% ( 8) 00:07:37.241 5923.446 - 5948.652: 0.4636% ( 5) 00:07:37.241 5948.652 - 5973.858: 0.4979% ( 6) 00:07:37.241 5973.858 - 5999.065: 0.5266% ( 5) 00:07:37.241 5999.065 - 6024.271: 0.6353% ( 19) 00:07:37.241 6024.271 - 6049.477: 0.7612% ( 22) 00:07:37.241 6049.477 - 6074.683: 1.0588% ( 52) 00:07:37.241 6074.683 - 6099.889: 1.4652% ( 71) 00:07:37.241 6099.889 - 6125.095: 1.8429% ( 66) 
00:07:37.241 6125.095 - 6150.302: 2.3523% ( 89) 00:07:37.241 6150.302 - 6175.508: 2.6671% ( 55) 00:07:37.241 6175.508 - 6200.714: 2.8846% ( 38) 00:07:37.241 6200.714 - 6225.920: 3.1651% ( 49) 00:07:37.241 6225.920 - 6251.126: 3.8233% ( 115) 00:07:37.241 6251.126 - 6276.332: 4.3155% ( 86) 00:07:37.241 6276.332 - 6301.538: 4.6532% ( 59) 00:07:37.241 6301.538 - 6326.745: 5.0996% ( 78) 00:07:37.241 6326.745 - 6351.951: 5.7120% ( 107) 00:07:37.241 6351.951 - 6377.157: 6.4389% ( 127) 00:07:37.241 6377.157 - 6402.363: 7.1772% ( 129) 00:07:37.241 6402.363 - 6427.569: 8.5108% ( 233) 00:07:37.241 6427.569 - 6452.775: 9.9931% ( 259) 00:07:37.241 6452.775 - 6503.188: 13.1010% ( 543) 00:07:37.241 6503.188 - 6553.600: 18.0689% ( 868) 00:07:37.241 6553.600 - 6604.012: 23.9812% ( 1033) 00:07:37.241 6604.012 - 6654.425: 30.4487% ( 1130) 00:07:37.241 6654.425 - 6704.837: 34.6612% ( 736) 00:07:37.241 6704.837 - 6755.249: 39.2399% ( 800) 00:07:37.241 6755.249 - 6805.662: 42.9544% ( 649) 00:07:37.241 6805.662 - 6856.074: 47.2012% ( 742) 00:07:37.241 6856.074 - 6906.486: 51.3107% ( 718) 00:07:37.241 6906.486 - 6956.898: 55.3285% ( 702) 00:07:37.241 6956.898 - 7007.311: 58.3848% ( 534) 00:07:37.241 7007.311 - 7057.723: 60.7200% ( 408) 00:07:37.241 7057.723 - 7108.135: 63.7248% ( 525) 00:07:37.241 7108.135 - 7158.548: 65.8310% ( 368) 00:07:37.241 7158.548 - 7208.960: 67.8514% ( 353) 00:07:37.241 7208.960 - 7259.372: 69.7058% ( 324) 00:07:37.241 7259.372 - 7309.785: 71.6003% ( 331) 00:07:37.241 7309.785 - 7360.197: 73.0769% ( 258) 00:07:37.241 7360.197 - 7410.609: 74.2846% ( 211) 00:07:37.241 7410.609 - 7461.022: 75.4636% ( 206) 00:07:37.241 7461.022 - 7511.434: 76.3908% ( 162) 00:07:37.241 7511.434 - 7561.846: 77.8331% ( 252) 00:07:37.241 7561.846 - 7612.258: 78.9492% ( 195) 00:07:37.241 7612.258 - 7662.671: 79.7447% ( 139) 00:07:37.241 7662.671 - 7713.083: 80.8837% ( 199) 00:07:37.241 7713.083 - 7763.495: 81.3874% ( 88) 00:07:37.241 7763.495 - 7813.908: 81.7708% ( 67) 00:07:37.241 7813.908 - 7864.320: 82.1486% ( 66) 00:07:37.241 7864.320 - 7914.732: 82.6866% ( 94) 00:07:37.241 7914.732 - 7965.145: 83.2017% ( 90) 00:07:37.241 7965.145 - 8015.557: 83.8027% ( 105) 00:07:37.241 8015.557 - 8065.969: 84.3521% ( 96) 00:07:37.241 8065.969 - 8116.382: 85.1019% ( 131) 00:07:37.241 8116.382 - 8166.794: 85.9947% ( 156) 00:07:37.241 8166.794 - 8217.206: 86.4927% ( 87) 00:07:37.241 8217.206 - 8267.618: 86.7788% ( 50) 00:07:37.241 8267.618 - 8318.031: 87.1852% ( 71) 00:07:37.241 8318.031 - 8368.443: 87.5401% ( 62) 00:07:37.241 8368.443 - 8418.855: 88.2212% ( 119) 00:07:37.241 8418.855 - 8469.268: 88.6447% ( 74) 00:07:37.241 8469.268 - 8519.680: 88.9709% ( 57) 00:07:37.241 8519.680 - 8570.092: 89.1426% ( 30) 00:07:37.241 8570.092 - 8620.505: 89.2571% ( 20) 00:07:37.241 8620.505 - 8670.917: 89.3658% ( 19) 00:07:37.241 8670.917 - 8721.329: 89.4803% ( 20) 00:07:37.241 8721.329 - 8771.742: 89.5833% ( 18) 00:07:37.241 8771.742 - 8822.154: 89.6921% ( 19) 00:07:37.241 8822.154 - 8872.566: 89.7894% ( 17) 00:07:37.241 8872.566 - 8922.978: 89.9038% ( 20) 00:07:37.241 8922.978 - 8973.391: 90.0984% ( 34) 00:07:37.241 8973.391 - 9023.803: 90.2816% ( 32) 00:07:37.241 9023.803 - 9074.215: 90.5563% ( 48) 00:07:37.241 9074.215 - 9124.628: 90.8768% ( 56) 00:07:37.241 9124.628 - 9175.040: 91.1229% ( 43) 00:07:37.241 9175.040 - 9225.452: 91.4091% ( 50) 00:07:37.241 9225.452 - 9275.865: 91.7811% ( 65) 00:07:37.241 9275.865 - 9326.277: 91.9471% ( 29) 00:07:37.241 9326.277 - 9376.689: 92.1245% ( 31) 00:07:37.241 9376.689 - 9427.102: 92.4393% ( 55) 
00:07:37.241 9427.102 - 9477.514: 92.7312% ( 51) 00:07:37.241 9477.514 - 9527.926: 92.9544% ( 39) 00:07:37.241 9527.926 - 9578.338: 93.2063% ( 44) 00:07:37.241 9578.338 - 9628.751: 93.3608% ( 27) 00:07:37.241 9628.751 - 9679.163: 93.5955% ( 41) 00:07:37.241 9679.163 - 9729.575: 93.7443% ( 26) 00:07:37.241 9729.575 - 9779.988: 93.9446% ( 35) 00:07:37.241 9779.988 - 9830.400: 94.2308% ( 50) 00:07:37.241 9830.400 - 9880.812: 94.3567% ( 22) 00:07:37.241 9880.812 - 9931.225: 94.5112% ( 27) 00:07:37.241 9931.225 - 9981.637: 94.7115% ( 35) 00:07:37.241 9981.637 - 10032.049: 94.8832% ( 30) 00:07:37.241 10032.049 - 10082.462: 95.1522% ( 47) 00:07:37.241 10082.462 - 10132.874: 95.4270% ( 48) 00:07:37.241 10132.874 - 10183.286: 95.5357% ( 19) 00:07:37.241 10183.286 - 10233.698: 95.6273% ( 16) 00:07:37.241 10233.698 - 10284.111: 95.7246% ( 17) 00:07:37.241 10284.111 - 10334.523: 95.8104% ( 15) 00:07:37.241 10334.523 - 10384.935: 95.9192% ( 19) 00:07:37.241 10384.935 - 10435.348: 96.0050% ( 15) 00:07:37.241 10435.348 - 10485.760: 96.0794% ( 13) 00:07:37.241 10485.760 - 10536.172: 96.1481% ( 12) 00:07:37.241 10536.172 - 10586.585: 96.1939% ( 8) 00:07:37.241 10586.585 - 10636.997: 96.2683% ( 13) 00:07:37.241 10636.997 - 10687.409: 96.4686% ( 35) 00:07:37.241 10687.409 - 10737.822: 96.5831% ( 20) 00:07:37.241 10737.822 - 10788.234: 96.6232% ( 7) 00:07:37.241 10788.234 - 10838.646: 96.6575% ( 6) 00:07:37.241 10838.646 - 10889.058: 96.6976% ( 7) 00:07:37.241 10889.058 - 10939.471: 96.7090% ( 2) 00:07:37.241 10989.883 - 11040.295: 96.7147% ( 1) 00:07:37.241 11191.532 - 11241.945: 96.7434% ( 5) 00:07:37.241 11241.945 - 11292.357: 96.8407% ( 17) 00:07:37.241 11292.357 - 11342.769: 97.0467% ( 36) 00:07:37.241 11342.769 - 11393.182: 97.1669% ( 21) 00:07:37.241 11393.182 - 11443.594: 97.4245% ( 45) 00:07:37.241 11443.594 - 11494.006: 97.6477% ( 39) 00:07:37.241 11494.006 - 11544.418: 97.9281% ( 49) 00:07:37.241 11544.418 - 11594.831: 98.1113% ( 32) 00:07:37.241 11594.831 - 11645.243: 98.2143% ( 18) 00:07:37.241 11645.243 - 11695.655: 98.3001% ( 15) 00:07:37.242 11695.655 - 11746.068: 98.3688% ( 12) 00:07:37.242 11746.068 - 11796.480: 98.4203% ( 9) 00:07:37.242 11796.480 - 11846.892: 98.4604% ( 7) 00:07:37.242 11846.892 - 11897.305: 98.4890% ( 5) 00:07:37.242 11897.305 - 11947.717: 98.5291% ( 7) 00:07:37.242 11947.717 - 11998.129: 98.5520% ( 4) 00:07:37.242 11998.129 - 12048.542: 98.5920% ( 7) 00:07:37.242 12048.542 - 12098.954: 98.6207% ( 5) 00:07:37.242 12098.954 - 12149.366: 98.6550% ( 6) 00:07:37.242 12149.366 - 12199.778: 98.6893% ( 6) 00:07:37.242 12199.778 - 12250.191: 98.7179% ( 5) 00:07:37.242 12250.191 - 12300.603: 98.7523% ( 6) 00:07:37.242 12300.603 - 12351.015: 98.7924% ( 7) 00:07:37.242 12351.015 - 12401.428: 98.8210% ( 5) 00:07:37.242 12401.428 - 12451.840: 98.8553% ( 6) 00:07:37.242 12451.840 - 12502.252: 98.8725% ( 3) 00:07:37.242 12502.252 - 12552.665: 98.8839% ( 2) 00:07:37.242 12552.665 - 12603.077: 98.9011% ( 3) 00:07:37.242 13107.200 - 13208.025: 98.9297% ( 5) 00:07:37.242 13208.025 - 13308.849: 98.9526% ( 4) 00:07:37.242 13308.849 - 13409.674: 99.1300% ( 31) 00:07:37.242 13409.674 - 13510.498: 99.1758% ( 8) 00:07:37.242 13510.498 - 13611.323: 99.1873% ( 2) 00:07:37.242 13611.323 - 13712.148: 99.2102% ( 4) 00:07:37.242 13712.148 - 13812.972: 99.2331% ( 4) 00:07:37.242 13812.972 - 13913.797: 99.2674% ( 6) 00:07:37.242 14417.920 - 14518.745: 99.3075% ( 7) 00:07:37.242 14518.745 - 14619.569: 99.3590% ( 9) 00:07:37.242 14619.569 - 14720.394: 99.4105% ( 9) 00:07:37.242 14720.394 - 14821.218: 
99.4505% ( 7) 00:07:37.242 14821.218 - 14922.043: 99.5021% ( 9) 00:07:37.242 14922.043 - 15022.868: 99.5307% ( 5) 00:07:37.242 15022.868 - 15123.692: 99.5593% ( 5) 00:07:37.242 15123.692 - 15224.517: 99.5879% ( 5) 00:07:37.242 15224.517 - 15325.342: 99.6108% ( 4) 00:07:37.242 15325.342 - 15426.166: 99.6337% ( 4) 00:07:37.242 18854.203 - 18955.028: 99.6451% ( 2) 00:07:37.242 18955.028 - 19055.852: 99.7024% ( 10) 00:07:37.242 19055.852 - 19156.677: 99.8226% ( 21) 00:07:37.242 19156.677 - 19257.502: 99.8512% ( 5) 00:07:37.242 19257.502 - 19358.326: 99.8569% ( 1) 00:07:37.242 19358.326 - 19459.151: 99.8798% ( 4) 00:07:37.242 19459.151 - 19559.975: 99.9199% ( 7) 00:07:37.242 19559.975 - 19660.800: 99.9599% ( 7) 00:07:37.242 19660.800 - 19761.625: 100.0000% ( 7) 00:07:37.242 00:07:37.242 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:37.242 ============================================================================== 00:07:37.242 Range in us Cumulative IO count 00:07:37.242 4310.252 - 4335.458: 0.0057% ( 1) 00:07:37.242 4335.458 - 4360.665: 0.0172% ( 2) 00:07:37.242 4360.665 - 4385.871: 0.0687% ( 9) 00:07:37.242 4385.871 - 4411.077: 0.1660% ( 17) 00:07:37.242 4411.077 - 4436.283: 0.2289% ( 11) 00:07:37.242 4436.283 - 4461.489: 0.2576% ( 5) 00:07:37.242 4461.489 - 4486.695: 0.2690% ( 2) 00:07:37.242 4486.695 - 4511.902: 0.2804% ( 2) 00:07:37.242 4511.902 - 4537.108: 0.2919% ( 2) 00:07:37.242 4537.108 - 4562.314: 0.3033% ( 2) 00:07:37.242 4562.314 - 4587.520: 0.3148% ( 2) 00:07:37.242 4587.520 - 4612.726: 0.3262% ( 2) 00:07:37.242 4612.726 - 4637.932: 0.3377% ( 2) 00:07:37.242 4637.932 - 4663.138: 0.3434% ( 1) 00:07:37.242 4663.138 - 4688.345: 0.3549% ( 2) 00:07:37.242 4688.345 - 4713.551: 0.3663% ( 2) 00:07:37.242 5570.560 - 5595.766: 0.3777% ( 2) 00:07:37.242 5595.766 - 5620.972: 0.3892% ( 2) 00:07:37.242 5620.972 - 5646.178: 0.4006% ( 2) 00:07:37.242 5646.178 - 5671.385: 0.4121% ( 2) 00:07:37.242 5671.385 - 5696.591: 0.4293% ( 3) 00:07:37.242 5696.591 - 5721.797: 0.4464% ( 3) 00:07:37.242 5721.797 - 5747.003: 0.4693% ( 4) 00:07:37.242 5747.003 - 5772.209: 0.4865% ( 3) 00:07:37.242 5772.209 - 5797.415: 0.5323% ( 8) 00:07:37.242 5797.415 - 5822.622: 0.6296% ( 17) 00:07:37.242 5822.622 - 5847.828: 0.6639% ( 6) 00:07:37.242 5847.828 - 5873.034: 0.7154% ( 9) 00:07:37.242 5873.034 - 5898.240: 0.7555% ( 7) 00:07:37.242 5898.240 - 5923.446: 0.8242% ( 12) 00:07:37.242 5923.446 - 5948.652: 0.9329% ( 19) 00:07:37.242 5948.652 - 5973.858: 1.0417% ( 19) 00:07:37.242 5973.858 - 5999.065: 1.1848% ( 25) 00:07:37.242 5999.065 - 6024.271: 1.3565% ( 30) 00:07:37.242 6024.271 - 6049.477: 1.5568% ( 35) 00:07:37.242 6049.477 - 6074.683: 1.7342% ( 31) 00:07:37.242 6074.683 - 6099.889: 1.9402% ( 36) 00:07:37.242 6099.889 - 6125.095: 2.2150% ( 48) 00:07:37.242 6125.095 - 6150.302: 2.5355% ( 56) 00:07:37.242 6150.302 - 6175.508: 2.8903% ( 62) 00:07:37.242 6175.508 - 6200.714: 3.1880% ( 52) 00:07:37.242 6200.714 - 6225.920: 3.6229% ( 76) 00:07:37.242 6225.920 - 6251.126: 4.0808% ( 80) 00:07:37.242 6251.126 - 6276.332: 4.5387% ( 80) 00:07:37.242 6276.332 - 6301.538: 5.0767% ( 94) 00:07:37.242 6301.538 - 6326.745: 5.7177% ( 112) 00:07:37.242 6326.745 - 6351.951: 6.4618% ( 130) 00:07:37.242 6351.951 - 6377.157: 7.6351% ( 205) 00:07:37.242 6377.157 - 6402.363: 8.7855% ( 201) 00:07:37.242 6402.363 - 6427.569: 10.0103% ( 214) 00:07:37.242 6427.569 - 6452.775: 11.8075% ( 314) 00:07:37.242 6452.775 - 6503.188: 15.2129% ( 595) 00:07:37.242 6503.188 - 6553.600: 19.3395% ( 721) 00:07:37.242 6553.600 - 
6604.012: 24.3189% ( 870) 00:07:37.242 6604.012 - 6654.425: 29.0636% ( 829) 00:07:37.242 6654.425 - 6704.837: 33.6195% ( 796) 00:07:37.242 6704.837 - 6755.249: 37.8549% ( 740) 00:07:37.242 6755.249 - 6805.662: 42.2562% ( 769) 00:07:37.242 6805.662 - 6856.074: 46.5373% ( 748) 00:07:37.242 6856.074 - 6906.486: 51.1561% ( 807) 00:07:37.242 6906.486 - 6956.898: 54.5101% ( 586) 00:07:37.242 6956.898 - 7007.311: 57.4462% ( 513) 00:07:37.242 7007.311 - 7057.723: 60.4338% ( 522) 00:07:37.242 7057.723 - 7108.135: 62.6946% ( 395) 00:07:37.242 7108.135 - 7158.548: 64.9439% ( 393) 00:07:37.242 7158.548 - 7208.960: 66.6724% ( 302) 00:07:37.242 7208.960 - 7259.372: 68.3322% ( 290) 00:07:37.242 7259.372 - 7309.785: 70.3812% ( 358) 00:07:37.242 7309.785 - 7360.197: 71.9952% ( 282) 00:07:37.242 7360.197 - 7410.609: 73.5691% ( 275) 00:07:37.242 7410.609 - 7461.022: 74.8913% ( 231) 00:07:37.242 7461.022 - 7511.434: 76.0932% ( 210) 00:07:37.242 7511.434 - 7561.846: 76.9345% ( 147) 00:07:37.242 7561.846 - 7612.258: 77.9190% ( 172) 00:07:37.242 7612.258 - 7662.671: 78.8519% ( 163) 00:07:37.242 7662.671 - 7713.083: 79.7161% ( 151) 00:07:37.242 7713.083 - 7763.495: 80.2141% ( 87) 00:07:37.242 7763.495 - 7813.908: 81.1413% ( 162) 00:07:37.242 7813.908 - 7864.320: 81.7136% ( 100) 00:07:37.242 7864.320 - 7914.732: 82.2344% ( 91) 00:07:37.242 7914.732 - 7965.145: 82.8526% ( 108) 00:07:37.242 7965.145 - 8015.557: 83.2589% ( 71) 00:07:37.242 8015.557 - 8065.969: 83.8313% ( 100) 00:07:37.242 8065.969 - 8116.382: 84.2949% ( 81) 00:07:37.242 8116.382 - 8166.794: 84.9702% ( 118) 00:07:37.242 8166.794 - 8217.206: 85.6799% ( 124) 00:07:37.242 8217.206 - 8267.618: 86.2866% ( 106) 00:07:37.242 8267.618 - 8318.031: 86.8533% ( 99) 00:07:37.242 8318.031 - 8368.443: 87.5801% ( 127) 00:07:37.242 8368.443 - 8418.855: 88.0037% ( 74) 00:07:37.242 8418.855 - 8469.268: 88.3242% ( 56) 00:07:37.242 8469.268 - 8519.680: 88.7935% ( 82) 00:07:37.242 8519.680 - 8570.092: 89.0511% ( 45) 00:07:37.242 8570.092 - 8620.505: 89.2170% ( 29) 00:07:37.242 8620.505 - 8670.917: 89.3773% ( 28) 00:07:37.242 8670.917 - 8721.329: 89.5891% ( 37) 00:07:37.242 8721.329 - 8771.742: 89.8466% ( 45) 00:07:37.242 8771.742 - 8822.154: 90.0469% ( 35) 00:07:37.242 8822.154 - 8872.566: 90.4190% ( 65) 00:07:37.242 8872.566 - 8922.978: 90.7566% ( 59) 00:07:37.242 8922.978 - 8973.391: 91.0314% ( 48) 00:07:37.242 8973.391 - 9023.803: 91.2717% ( 42) 00:07:37.242 9023.803 - 9074.215: 91.4721% ( 35) 00:07:37.242 9074.215 - 9124.628: 91.6380% ( 29) 00:07:37.242 9124.628 - 9175.040: 91.8326% ( 34) 00:07:37.242 9175.040 - 9225.452: 92.0902% ( 45) 00:07:37.242 9225.452 - 9275.865: 92.4222% ( 58) 00:07:37.242 9275.865 - 9326.277: 92.5881% ( 29) 00:07:37.242 9326.277 - 9376.689: 92.7942% ( 36) 00:07:37.242 9376.689 - 9427.102: 92.9487% ( 27) 00:07:37.242 9427.102 - 9477.514: 93.0975% ( 26) 00:07:37.242 9477.514 - 9527.926: 93.4180% ( 56) 00:07:37.242 9527.926 - 9578.338: 93.7214% ( 53) 00:07:37.242 9578.338 - 9628.751: 93.9389% ( 38) 00:07:37.242 9628.751 - 9679.163: 94.0419% ( 18) 00:07:37.242 9679.163 - 9729.575: 94.1449% ( 18) 00:07:37.242 9729.575 - 9779.988: 94.2708% ( 22) 00:07:37.242 9779.988 - 9830.400: 94.4082% ( 24) 00:07:37.242 9830.400 - 9880.812: 94.4826% ( 13) 00:07:37.242 9880.812 - 9931.225: 94.5456% ( 11) 00:07:37.242 9931.225 - 9981.637: 94.6829% ( 24) 00:07:37.242 9981.637 - 10032.049: 94.7459% ( 11) 00:07:37.242 10032.049 - 10082.462: 94.8432% ( 17) 00:07:37.242 10082.462 - 10132.874: 94.9176% ( 13) 00:07:37.242 10132.874 - 10183.286: 95.0321% ( 20) 00:07:37.242 
10183.286 - 10233.698: 95.2438% ( 37) 00:07:37.242 10233.698 - 10284.111: 95.3984% ( 27) 00:07:37.242 10284.111 - 10334.523: 95.6216% ( 39) 00:07:37.242 10334.523 - 10384.935: 95.7704% ( 26) 00:07:37.242 10384.935 - 10435.348: 95.8620% ( 16) 00:07:37.242 10435.348 - 10485.760: 95.9764% ( 20) 00:07:37.242 10485.760 - 10536.172: 96.0565% ( 14) 00:07:37.242 10536.172 - 10586.585: 96.0909% ( 6) 00:07:37.242 10586.585 - 10636.997: 96.1252% ( 6) 00:07:37.242 10636.997 - 10687.409: 96.1767% ( 9) 00:07:37.242 10687.409 - 10737.822: 96.2283% ( 9) 00:07:37.242 10737.822 - 10788.234: 96.2912% ( 11) 00:07:37.242 10788.234 - 10838.646: 96.3942% ( 18) 00:07:37.242 10838.646 - 10889.058: 96.5259% ( 23) 00:07:37.242 10889.058 - 10939.471: 96.7033% ( 31) 00:07:37.242 10939.471 - 10989.883: 96.8521% ( 26) 00:07:37.242 10989.883 - 11040.295: 96.9723% ( 21) 00:07:37.242 11040.295 - 11090.708: 97.1154% ( 25) 00:07:37.242 11090.708 - 11141.120: 97.2585% ( 25) 00:07:37.243 11141.120 - 11191.532: 97.3844% ( 22) 00:07:37.243 11191.532 - 11241.945: 97.5733% ( 33) 00:07:37.243 11241.945 - 11292.357: 97.9109% ( 59) 00:07:37.243 11292.357 - 11342.769: 98.0655% ( 27) 00:07:37.243 11342.769 - 11393.182: 98.1513% ( 15) 00:07:37.243 11393.182 - 11443.594: 98.2429% ( 16) 00:07:37.243 11443.594 - 11494.006: 98.3345% ( 16) 00:07:37.243 11494.006 - 11544.418: 98.4146% ( 14) 00:07:37.243 11544.418 - 11594.831: 98.5005% ( 15) 00:07:37.243 11594.831 - 11645.243: 98.6836% ( 32) 00:07:37.243 11645.243 - 11695.655: 98.7408% ( 10) 00:07:37.243 11695.655 - 11746.068: 98.7924% ( 9) 00:07:37.243 11746.068 - 11796.480: 98.8267% ( 6) 00:07:37.243 11796.480 - 11846.892: 98.8668% ( 7) 00:07:37.243 11846.892 - 11897.305: 98.8897% ( 4) 00:07:37.243 11897.305 - 11947.717: 98.9011% ( 2) 00:07:37.243 12905.551 - 13006.375: 98.9297% ( 5) 00:07:37.243 13006.375 - 13107.200: 99.0156% ( 15) 00:07:37.243 13107.200 - 13208.025: 99.1758% ( 28) 00:07:37.243 13208.025 - 13308.849: 99.2560% ( 14) 00:07:37.243 13308.849 - 13409.674: 99.2674% ( 2) 00:07:37.243 14720.394 - 14821.218: 99.2846% ( 3) 00:07:37.243 14821.218 - 14922.043: 99.3304% ( 8) 00:07:37.243 14922.043 - 15022.868: 99.3819% ( 9) 00:07:37.243 15022.868 - 15123.692: 99.4219% ( 7) 00:07:37.243 15123.692 - 15224.517: 99.4620% ( 7) 00:07:37.243 15224.517 - 15325.342: 99.5021% ( 7) 00:07:37.243 15325.342 - 15426.166: 99.5364% ( 6) 00:07:37.243 15426.166 - 15526.991: 99.5707% ( 6) 00:07:37.243 15526.991 - 15627.815: 99.5994% ( 5) 00:07:37.243 15627.815 - 15728.640: 99.6337% ( 6) 00:07:37.243 18753.378 - 18854.203: 99.6623% ( 5) 00:07:37.243 18854.203 - 18955.028: 99.6967% ( 6) 00:07:37.243 18955.028 - 19055.852: 99.7768% ( 14) 00:07:37.243 19055.852 - 19156.677: 99.8684% ( 16) 00:07:37.243 19156.677 - 19257.502: 99.9027% ( 6) 00:07:37.243 19257.502 - 19358.326: 99.9370% ( 6) 00:07:37.243 19358.326 - 19459.151: 99.9714% ( 6) 00:07:37.243 19459.151 - 19559.975: 100.0000% ( 5) 00:07:37.243 00:07:37.243 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:37.243 ============================================================================== 00:07:37.243 Range in us Cumulative IO count 00:07:37.243 4007.778 - 4032.985: 0.0114% ( 2) 00:07:37.243 4032.985 - 4058.191: 0.0229% ( 2) 00:07:37.243 4058.191 - 4083.397: 0.0343% ( 2) 00:07:37.243 4083.397 - 4108.603: 0.0515% ( 3) 00:07:37.243 4108.603 - 4133.809: 0.0744% ( 4) 00:07:37.243 4133.809 - 4159.015: 0.1087% ( 6) 00:07:37.243 4159.015 - 4184.222: 0.1431% ( 6) 00:07:37.243 4184.222 - 4209.428: 0.2175% ( 13) 00:07:37.243 4209.428 - 4234.634: 
0.2747% ( 10) 00:07:37.243 4234.634 - 4259.840: 0.2976% ( 4) 00:07:37.243 4259.840 - 4285.046: 0.3091% ( 2) 00:07:37.243 4285.046 - 4310.252: 0.3205% ( 2) 00:07:37.243 4310.252 - 4335.458: 0.3320% ( 2) 00:07:37.243 4335.458 - 4360.665: 0.3434% ( 2) 00:07:37.243 4360.665 - 4385.871: 0.3549% ( 2) 00:07:37.243 4385.871 - 4411.077: 0.3663% ( 2) 00:07:37.243 5570.560 - 5595.766: 0.3777% ( 2) 00:07:37.243 5595.766 - 5620.972: 0.3949% ( 3) 00:07:37.243 5620.972 - 5646.178: 0.4064% ( 2) 00:07:37.243 5646.178 - 5671.385: 0.4178% ( 2) 00:07:37.243 5671.385 - 5696.591: 0.4522% ( 6) 00:07:37.243 5696.591 - 5721.797: 0.4865% ( 6) 00:07:37.243 5721.797 - 5747.003: 0.5666% ( 14) 00:07:37.243 5747.003 - 5772.209: 0.6296% ( 11) 00:07:37.243 5772.209 - 5797.415: 0.6754% ( 8) 00:07:37.243 5797.415 - 5822.622: 0.7040% ( 5) 00:07:37.243 5822.622 - 5847.828: 0.7326% ( 5) 00:07:37.243 5847.828 - 5873.034: 0.7612% ( 5) 00:07:37.243 5873.034 - 5898.240: 0.7956% ( 6) 00:07:37.243 5898.240 - 5923.446: 0.8585% ( 11) 00:07:37.243 5923.446 - 5948.652: 0.9787% ( 21) 00:07:37.243 5948.652 - 5973.858: 1.0588% ( 14) 00:07:37.243 5973.858 - 5999.065: 1.1619% ( 18) 00:07:37.243 5999.065 - 6024.271: 1.2821% ( 21) 00:07:37.243 6024.271 - 6049.477: 1.3793% ( 17) 00:07:37.243 6049.477 - 6074.683: 1.5282% ( 26) 00:07:37.243 6074.683 - 6099.889: 1.6655% ( 24) 00:07:37.243 6099.889 - 6125.095: 1.8601% ( 34) 00:07:37.243 6125.095 - 6150.302: 2.2207% ( 63) 00:07:37.243 6150.302 - 6175.508: 2.7587% ( 94) 00:07:37.243 6175.508 - 6200.714: 3.0449% ( 50) 00:07:37.243 6200.714 - 6225.920: 3.4512% ( 71) 00:07:37.243 6225.920 - 6251.126: 3.7775% ( 57) 00:07:37.243 6251.126 - 6276.332: 4.3555% ( 101) 00:07:37.243 6276.332 - 6301.538: 4.8020% ( 78) 00:07:37.243 6301.538 - 6326.745: 5.5575% ( 132) 00:07:37.243 6326.745 - 6351.951: 6.3473% ( 138) 00:07:37.243 6351.951 - 6377.157: 7.3890% ( 182) 00:07:37.243 6377.157 - 6402.363: 8.4249% ( 181) 00:07:37.243 6402.363 - 6427.569: 9.8329% ( 246) 00:07:37.243 6427.569 - 6452.775: 11.4583% ( 284) 00:07:37.243 6452.775 - 6503.188: 15.6078% ( 725) 00:07:37.243 6503.188 - 6553.600: 19.3681% ( 657) 00:07:37.243 6553.600 - 6604.012: 24.6108% ( 916) 00:07:37.243 6604.012 - 6654.425: 29.2182% ( 805) 00:07:37.243 6654.425 - 6704.837: 34.0831% ( 850) 00:07:37.243 6704.837 - 6755.249: 38.1696% ( 714) 00:07:37.243 6755.249 - 6805.662: 41.5980% ( 599) 00:07:37.243 6805.662 - 6856.074: 46.7834% ( 906) 00:07:37.243 6856.074 - 6906.486: 51.0703% ( 749) 00:07:37.243 6906.486 - 6956.898: 54.6303% ( 622) 00:07:37.243 6956.898 - 7007.311: 57.6923% ( 535) 00:07:37.243 7007.311 - 7057.723: 60.5884% ( 506) 00:07:37.243 7057.723 - 7108.135: 63.4215% ( 495) 00:07:37.243 7108.135 - 7158.548: 65.0412% ( 283) 00:07:37.243 7158.548 - 7208.960: 66.7296% ( 295) 00:07:37.243 7208.960 - 7259.372: 68.5039% ( 310) 00:07:37.243 7259.372 - 7309.785: 69.9805% ( 258) 00:07:37.243 7309.785 - 7360.197: 71.4457% ( 256) 00:07:37.243 7360.197 - 7410.609: 73.0483% ( 280) 00:07:37.243 7410.609 - 7461.022: 74.5307% ( 259) 00:07:37.243 7461.022 - 7511.434: 76.1103% ( 276) 00:07:37.243 7511.434 - 7561.846: 77.5011% ( 243) 00:07:37.243 7561.846 - 7612.258: 78.6516% ( 201) 00:07:37.243 7612.258 - 7662.671: 79.5673% ( 160) 00:07:37.243 7662.671 - 7713.083: 80.1854% ( 108) 00:07:37.243 7713.083 - 7763.495: 80.8265% ( 112) 00:07:37.243 7763.495 - 7813.908: 81.5247% ( 122) 00:07:37.243 7813.908 - 7864.320: 82.0971% ( 100) 00:07:37.243 7864.320 - 7914.732: 82.7038% ( 106) 00:07:37.243 7914.732 - 7965.145: 82.9785% ( 48) 00:07:37.243 7965.145 - 8015.557: 
83.3162% ( 59) 00:07:37.243 8015.557 - 8065.969: 83.7397% ( 74) 00:07:37.243 8065.969 - 8116.382: 84.2090% ( 82) 00:07:37.243 8116.382 - 8166.794: 84.6326% ( 74) 00:07:37.243 8166.794 - 8217.206: 85.4109% ( 136) 00:07:37.243 8217.206 - 8267.618: 86.1779% ( 134) 00:07:37.243 8267.618 - 8318.031: 86.7159% ( 94) 00:07:37.243 8318.031 - 8368.443: 87.1223% ( 71) 00:07:37.243 8368.443 - 8418.855: 87.5572% ( 76) 00:07:37.243 8418.855 - 8469.268: 88.0609% ( 88) 00:07:37.243 8469.268 - 8519.680: 88.5073% ( 78) 00:07:37.243 8519.680 - 8570.092: 88.8049% ( 52) 00:07:37.243 8570.092 - 8620.505: 89.1426% ( 59) 00:07:37.243 8620.505 - 8670.917: 89.4002% ( 45) 00:07:37.243 8670.917 - 8721.329: 89.6921% ( 51) 00:07:37.243 8721.329 - 8771.742: 89.9840% ( 51) 00:07:37.243 8771.742 - 8822.154: 90.3217% ( 59) 00:07:37.243 8822.154 - 8872.566: 90.6937% ( 65) 00:07:37.243 8872.566 - 8922.978: 91.1916% ( 87) 00:07:37.243 8922.978 - 8973.391: 91.3919% ( 35) 00:07:37.243 8973.391 - 9023.803: 91.5408% ( 26) 00:07:37.243 9023.803 - 9074.215: 91.6667% ( 22) 00:07:37.243 9074.215 - 9124.628: 91.7697% ( 18) 00:07:37.243 9124.628 - 9175.040: 91.8613% ( 16) 00:07:37.243 9175.040 - 9225.452: 91.9128% ( 9) 00:07:37.243 9225.452 - 9275.865: 91.9700% ( 10) 00:07:37.243 9275.865 - 9326.277: 92.0330% ( 11) 00:07:37.243 9326.277 - 9376.689: 92.0673% ( 6) 00:07:37.243 9376.689 - 9427.102: 92.1360% ( 12) 00:07:37.243 9427.102 - 9477.514: 92.2333% ( 17) 00:07:37.243 9477.514 - 9527.926: 92.4679% ( 41) 00:07:37.243 9527.926 - 9578.338: 92.6110% ( 25) 00:07:37.243 9578.338 - 9628.751: 92.8056% ( 34) 00:07:37.243 9628.751 - 9679.163: 93.0060% ( 35) 00:07:37.243 9679.163 - 9729.575: 93.1433% ( 24) 00:07:37.243 9729.575 - 9779.988: 93.2921% ( 26) 00:07:37.243 9779.988 - 9830.400: 93.4581% ( 29) 00:07:37.243 9830.400 - 9880.812: 93.5955% ( 24) 00:07:37.243 9880.812 - 9931.225: 93.8530% ( 45) 00:07:37.243 9931.225 - 9981.637: 94.1163% ( 46) 00:07:37.243 9981.637 - 10032.049: 94.3853% ( 47) 00:07:37.243 10032.049 - 10082.462: 94.6200% ( 41) 00:07:37.243 10082.462 - 10132.874: 94.8661% ( 43) 00:07:37.243 10132.874 - 10183.286: 95.1809% ( 55) 00:07:37.243 10183.286 - 10233.698: 95.5071% ( 57) 00:07:37.243 10233.698 - 10284.111: 95.8219% ( 55) 00:07:37.243 10284.111 - 10334.523: 96.0966% ( 48) 00:07:37.243 10334.523 - 10384.935: 96.3141% ( 38) 00:07:37.243 10384.935 - 10435.348: 96.4744% ( 28) 00:07:37.243 10435.348 - 10485.760: 96.5888% ( 20) 00:07:37.243 10485.760 - 10536.172: 96.7262% ( 24) 00:07:37.243 10536.172 - 10586.585: 96.8120% ( 15) 00:07:37.243 10586.585 - 10636.997: 96.9151% ( 18) 00:07:37.243 10636.997 - 10687.409: 97.0353% ( 21) 00:07:37.243 10687.409 - 10737.822: 97.1497% ( 20) 00:07:37.243 10737.822 - 10788.234: 97.2814% ( 23) 00:07:37.243 10788.234 - 10838.646: 97.3901% ( 19) 00:07:37.243 10838.646 - 10889.058: 97.4989% ( 19) 00:07:37.243 10889.058 - 10939.471: 97.5733% ( 13) 00:07:37.243 10939.471 - 10989.883: 97.6534% ( 14) 00:07:37.243 10989.883 - 11040.295: 97.8251% ( 30) 00:07:37.243 11040.295 - 11090.708: 97.9109% ( 15) 00:07:37.243 11090.708 - 11141.120: 97.9682% ( 10) 00:07:37.243 11141.120 - 11191.532: 98.0082% ( 7) 00:07:37.243 11191.532 - 11241.945: 98.0540% ( 8) 00:07:37.243 11241.945 - 11292.357: 98.0712% ( 3) 00:07:37.243 11292.357 - 11342.769: 98.0941% ( 4) 00:07:37.243 11342.769 - 11393.182: 98.1170% ( 4) 00:07:37.244 11393.182 - 11443.594: 98.1685% ( 9) 00:07:37.244 11443.594 - 11494.006: 98.2372% ( 12) 00:07:37.244 11494.006 - 11544.418: 98.3345% ( 17) 00:07:37.244 11544.418 - 11594.831: 98.5462% ( 37) 
00:07:37.244 11594.831 - 11645.243: 98.5863% ( 7) 00:07:37.244 11645.243 - 11695.655: 98.6493% ( 11) 00:07:37.244 11695.655 - 11746.068: 98.7866% ( 24) 00:07:37.244 11746.068 - 11796.480: 98.8210% ( 6) 00:07:37.244 11796.480 - 11846.892: 98.8553% ( 6) 00:07:37.244 11846.892 - 11897.305: 98.8725% ( 3) 00:07:37.244 11897.305 - 11947.717: 98.8897% ( 3) 00:07:37.244 11947.717 - 11998.129: 98.9011% ( 2) 00:07:37.244 12804.726 - 12855.138: 98.9068% ( 1) 00:07:37.244 13006.375 - 13107.200: 98.9125% ( 1) 00:07:37.244 13107.200 - 13208.025: 98.9755% ( 11) 00:07:37.244 13208.025 - 13308.849: 99.2273% ( 44) 00:07:37.244 13308.849 - 13409.674: 99.2617% ( 6) 00:07:37.244 13409.674 - 13510.498: 99.2674% ( 1) 00:07:37.244 14216.271 - 14317.095: 99.2731% ( 1) 00:07:37.244 14417.920 - 14518.745: 99.2788% ( 1) 00:07:37.244 14518.745 - 14619.569: 99.3361% ( 10) 00:07:37.244 14619.569 - 14720.394: 99.3761% ( 7) 00:07:37.244 14720.394 - 14821.218: 99.4334% ( 10) 00:07:37.244 14821.218 - 14922.043: 99.4906% ( 10) 00:07:37.244 14922.043 - 15022.868: 99.5192% ( 5) 00:07:37.244 15022.868 - 15123.692: 99.5478% ( 5) 00:07:37.244 15123.692 - 15224.517: 99.5765% ( 5) 00:07:37.244 15224.517 - 15325.342: 99.6051% ( 5) 00:07:37.244 15325.342 - 15426.166: 99.6337% ( 5) 00:07:37.244 18652.554 - 18753.378: 99.6509% ( 3) 00:07:37.244 18753.378 - 18854.203: 99.8111% ( 28) 00:07:37.244 18854.203 - 18955.028: 99.8855% ( 13) 00:07:37.244 18955.028 - 19055.852: 99.9199% ( 6) 00:07:37.244 19055.852 - 19156.677: 99.9370% ( 3) 00:07:37.244 19156.677 - 19257.502: 99.9657% ( 5) 00:07:37.244 19257.502 - 19358.326: 99.9943% ( 5) 00:07:37.244 19358.326 - 19459.151: 100.0000% ( 1) 00:07:37.244 00:07:37.244 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:37.244 ============================================================================== 00:07:37.244 Range in us Cumulative IO count 00:07:37.244 3579.274 - 3604.480: 0.0057% ( 1) 00:07:37.244 3629.686 - 3654.892: 0.0114% ( 1) 00:07:37.244 3755.717 - 3780.923: 0.0172% ( 1) 00:07:37.244 3780.923 - 3806.129: 0.0401% ( 4) 00:07:37.244 3806.129 - 3831.335: 0.0572% ( 3) 00:07:37.244 3831.335 - 3856.542: 0.0744% ( 3) 00:07:37.244 3856.542 - 3881.748: 0.1087% ( 6) 00:07:37.244 3881.748 - 3906.954: 0.2060% ( 17) 00:07:37.244 3906.954 - 3932.160: 0.2804% ( 13) 00:07:37.244 3932.160 - 3957.366: 0.2976% ( 3) 00:07:37.244 3957.366 - 3982.572: 0.3033% ( 1) 00:07:37.244 3982.572 - 4007.778: 0.3148% ( 2) 00:07:37.244 4007.778 - 4032.985: 0.3262% ( 2) 00:07:37.244 4032.985 - 4058.191: 0.3377% ( 2) 00:07:37.244 4058.191 - 4083.397: 0.3491% ( 2) 00:07:37.244 4108.603 - 4133.809: 0.3606% ( 2) 00:07:37.244 4133.809 - 4159.015: 0.3663% ( 1) 00:07:37.244 5368.911 - 5394.117: 0.3720% ( 1) 00:07:37.244 5394.117 - 5419.323: 0.3892% ( 3) 00:07:37.244 5419.323 - 5444.529: 0.4064% ( 3) 00:07:37.244 5444.529 - 5469.735: 0.4350% ( 5) 00:07:37.244 5469.735 - 5494.942: 0.5266% ( 16) 00:07:37.244 5494.942 - 5520.148: 0.6010% ( 13) 00:07:37.244 5520.148 - 5545.354: 0.6467% ( 8) 00:07:37.244 5545.354 - 5570.560: 0.6582% ( 2) 00:07:37.244 5570.560 - 5595.766: 0.6639% ( 1) 00:07:37.244 5595.766 - 5620.972: 0.6754% ( 2) 00:07:37.244 5620.972 - 5646.178: 0.6868% ( 2) 00:07:37.244 5646.178 - 5671.385: 0.6983% ( 2) 00:07:37.244 5671.385 - 5696.591: 0.7097% ( 2) 00:07:37.244 5696.591 - 5721.797: 0.7212% ( 2) 00:07:37.244 5721.797 - 5747.003: 0.7383% ( 3) 00:07:37.244 5747.003 - 5772.209: 0.7440% ( 1) 00:07:37.244 5772.209 - 5797.415: 0.7498% ( 1) 00:07:37.244 5797.415 - 5822.622: 0.7555% ( 1) 00:07:37.244 
5822.622 - 5847.828: 0.7612% ( 1) 00:07:37.244 5847.828 - 5873.034: 0.7898% ( 5) 00:07:37.244 5873.034 - 5898.240: 0.8127% ( 4) 00:07:37.244 5898.240 - 5923.446: 0.8299% ( 3) 00:07:37.244 5923.446 - 5948.652: 0.8986% ( 12) 00:07:37.244 5948.652 - 5973.858: 0.9959% ( 17) 00:07:37.244 5973.858 - 5999.065: 1.1218% ( 22) 00:07:37.244 5999.065 - 6024.271: 1.3278% ( 36) 00:07:37.244 6024.271 - 6049.477: 1.4538% ( 22) 00:07:37.244 6049.477 - 6074.683: 1.6083% ( 27) 00:07:37.244 6074.683 - 6099.889: 1.8086% ( 35) 00:07:37.244 6099.889 - 6125.095: 2.1348% ( 57) 00:07:37.244 6125.095 - 6150.302: 2.6099% ( 83) 00:07:37.244 6150.302 - 6175.508: 2.8789% ( 47) 00:07:37.244 6175.508 - 6200.714: 3.2910% ( 72) 00:07:37.244 6200.714 - 6225.920: 3.6229% ( 58) 00:07:37.244 6225.920 - 6251.126: 4.0293% ( 71) 00:07:37.244 6251.126 - 6276.332: 4.6074% ( 101) 00:07:37.244 6276.332 - 6301.538: 5.1912% ( 102) 00:07:37.244 6301.538 - 6326.745: 5.8265% ( 111) 00:07:37.244 6326.745 - 6351.951: 6.6506% ( 144) 00:07:37.244 6351.951 - 6377.157: 7.4290% ( 136) 00:07:37.244 6377.157 - 6402.363: 8.6195% ( 208) 00:07:37.244 6402.363 - 6427.569: 9.9473% ( 232) 00:07:37.244 6427.569 - 6452.775: 11.1722% ( 214) 00:07:37.244 6452.775 - 6503.188: 14.8752% ( 647) 00:07:37.244 6503.188 - 6553.600: 19.3624% ( 784) 00:07:37.244 6553.600 - 6604.012: 23.9526% ( 802) 00:07:37.244 6604.012 - 6654.425: 28.7660% ( 841) 00:07:37.244 6654.425 - 6704.837: 33.4249% ( 814) 00:07:37.244 6704.837 - 6755.249: 38.5073% ( 888) 00:07:37.244 6755.249 - 6805.662: 43.2120% ( 822) 00:07:37.244 6805.662 - 6856.074: 47.9167% ( 822) 00:07:37.244 6856.074 - 6906.486: 51.6369% ( 650) 00:07:37.244 6906.486 - 6956.898: 55.4716% ( 670) 00:07:37.244 6956.898 - 7007.311: 58.1044% ( 460) 00:07:37.244 7007.311 - 7057.723: 60.6799% ( 450) 00:07:37.244 7057.723 - 7108.135: 63.0323% ( 411) 00:07:37.244 7108.135 - 7158.548: 64.9783% ( 340) 00:07:37.244 7158.548 - 7208.960: 66.6209% ( 287) 00:07:37.244 7208.960 - 7259.372: 68.7729% ( 376) 00:07:37.244 7259.372 - 7309.785: 70.1580% ( 242) 00:07:37.244 7309.785 - 7360.197: 72.2413% ( 364) 00:07:37.244 7360.197 - 7410.609: 73.7809% ( 269) 00:07:37.244 7410.609 - 7461.022: 75.0286% ( 218) 00:07:37.244 7461.022 - 7511.434: 76.2706% ( 217) 00:07:37.244 7511.434 - 7561.846: 77.2436% ( 170) 00:07:37.244 7561.846 - 7612.258: 78.0964% ( 149) 00:07:37.244 7612.258 - 7662.671: 78.7889% ( 121) 00:07:37.244 7662.671 - 7713.083: 79.9222% ( 198) 00:07:37.244 7713.083 - 7763.495: 80.9009% ( 171) 00:07:37.244 7763.495 - 7813.908: 81.7823% ( 154) 00:07:37.244 7813.908 - 7864.320: 82.3775% ( 104) 00:07:37.244 7864.320 - 7914.732: 82.8468% ( 82) 00:07:37.244 7914.732 - 7965.145: 83.3848% ( 94) 00:07:37.244 7965.145 - 8015.557: 83.8198% ( 76) 00:07:37.244 8015.557 - 8065.969: 84.4551% ( 111) 00:07:37.244 8065.969 - 8116.382: 85.1648% ( 124) 00:07:37.244 8116.382 - 8166.794: 85.6799% ( 90) 00:07:37.244 8166.794 - 8217.206: 86.3038% ( 109) 00:07:37.244 8217.206 - 8267.618: 86.8361% ( 93) 00:07:37.244 8267.618 - 8318.031: 87.1738% ( 59) 00:07:37.244 8318.031 - 8368.443: 87.5000% ( 57) 00:07:37.244 8368.443 - 8418.855: 87.8606% ( 63) 00:07:37.244 8418.855 - 8469.268: 88.1754% ( 55) 00:07:37.244 8469.268 - 8519.680: 88.5588% ( 67) 00:07:37.245 8519.680 - 8570.092: 88.7763% ( 38) 00:07:37.245 8570.092 - 8620.505: 89.0110% ( 41) 00:07:37.245 8620.505 - 8670.917: 89.3315% ( 56) 00:07:37.245 8670.917 - 8721.329: 89.7779% ( 78) 00:07:37.245 8721.329 - 8771.742: 89.9897% ( 37) 00:07:37.245 8771.742 - 8822.154: 90.4075% ( 73) 00:07:37.245 8822.154 - 
8872.566: 90.6536% ( 43) 00:07:37.245 8872.566 - 8922.978: 90.7566% ( 18) 00:07:37.245 8922.978 - 8973.391: 90.8425% ( 15) 00:07:37.245 8973.391 - 9023.803: 90.9341% ( 16) 00:07:37.245 9023.803 - 9074.215: 91.0199% ( 15) 00:07:37.245 9074.215 - 9124.628: 91.1000% ( 14) 00:07:37.245 9124.628 - 9175.040: 91.3633% ( 46) 00:07:37.245 9175.040 - 9225.452: 91.4606% ( 17) 00:07:37.245 9225.452 - 9275.865: 91.5636% ( 18) 00:07:37.245 9275.865 - 9326.277: 91.6953% ( 23) 00:07:37.245 9326.277 - 9376.689: 91.8326% ( 24) 00:07:37.245 9376.689 - 9427.102: 92.0158% ( 32) 00:07:37.245 9427.102 - 9477.514: 92.1360% ( 21) 00:07:37.245 9477.514 - 9527.926: 92.2333% ( 17) 00:07:37.245 9527.926 - 9578.338: 92.3249% ( 16) 00:07:37.245 9578.338 - 9628.751: 92.4279% ( 18) 00:07:37.245 9628.751 - 9679.163: 92.6225% ( 34) 00:07:37.245 9679.163 - 9729.575: 93.0174% ( 69) 00:07:37.245 9729.575 - 9779.988: 93.3894% ( 65) 00:07:37.245 9779.988 - 9830.400: 93.6756% ( 50) 00:07:37.245 9830.400 - 9880.812: 93.9274% ( 44) 00:07:37.245 9880.812 - 9931.225: 94.1163% ( 33) 00:07:37.245 9931.225 - 9981.637: 94.3910% ( 48) 00:07:37.245 9981.637 - 10032.049: 94.6142% ( 39) 00:07:37.245 10032.049 - 10082.462: 94.8031% ( 33) 00:07:37.245 10082.462 - 10132.874: 95.0263% ( 39) 00:07:37.245 10132.874 - 10183.286: 95.3068% ( 49) 00:07:37.245 10183.286 - 10233.698: 95.6158% ( 54) 00:07:37.245 10233.698 - 10284.111: 95.9764% ( 63) 00:07:37.245 10284.111 - 10334.523: 96.2283% ( 44) 00:07:37.245 10334.523 - 10384.935: 96.3313% ( 18) 00:07:37.245 10384.935 - 10435.348: 96.4457% ( 20) 00:07:37.245 10435.348 - 10485.760: 96.5602% ( 20) 00:07:37.245 10485.760 - 10536.172: 96.6632% ( 18) 00:07:37.245 10536.172 - 10586.585: 96.7663% ( 18) 00:07:37.245 10586.585 - 10636.997: 96.8521% ( 15) 00:07:37.245 10636.997 - 10687.409: 96.9036% ( 9) 00:07:37.245 10687.409 - 10737.822: 96.9723% ( 12) 00:07:37.245 10737.822 - 10788.234: 97.0353% ( 11) 00:07:37.245 10788.234 - 10838.646: 97.1383% ( 18) 00:07:37.245 10838.646 - 10889.058: 97.2699% ( 23) 00:07:37.245 10889.058 - 10939.471: 97.4473% ( 31) 00:07:37.245 10939.471 - 10989.883: 97.5962% ( 26) 00:07:37.245 10989.883 - 11040.295: 97.7392% ( 25) 00:07:37.245 11040.295 - 11090.708: 97.8194% ( 14) 00:07:37.245 11090.708 - 11141.120: 97.9453% ( 22) 00:07:37.245 11141.120 - 11191.532: 98.0082% ( 11) 00:07:37.245 11191.532 - 11241.945: 98.0655% ( 10) 00:07:37.245 11241.945 - 11292.357: 98.1284% ( 11) 00:07:37.245 11292.357 - 11342.769: 98.1571% ( 5) 00:07:37.245 11342.769 - 11393.182: 98.1799% ( 4) 00:07:37.245 11393.182 - 11443.594: 98.2028% ( 4) 00:07:37.245 11443.594 - 11494.006: 98.2257% ( 4) 00:07:37.245 11494.006 - 11544.418: 98.2601% ( 6) 00:07:37.245 11544.418 - 11594.831: 98.2944% ( 6) 00:07:37.245 11594.831 - 11645.243: 98.4833% ( 33) 00:07:37.245 11645.243 - 11695.655: 98.4947% ( 2) 00:07:37.245 11695.655 - 11746.068: 98.5176% ( 4) 00:07:37.245 11746.068 - 11796.480: 98.5291% ( 2) 00:07:37.245 11796.480 - 11846.892: 98.5462% ( 3) 00:07:37.245 11846.892 - 11897.305: 98.5749% ( 5) 00:07:37.245 11897.305 - 11947.717: 98.5978% ( 4) 00:07:37.245 11947.717 - 11998.129: 98.6264% ( 5) 00:07:37.245 11998.129 - 12048.542: 98.6722% ( 8) 00:07:37.245 12048.542 - 12098.954: 98.7866% ( 20) 00:07:37.245 12098.954 - 12149.366: 98.8210% ( 6) 00:07:37.245 12149.366 - 12199.778: 98.8610% ( 7) 00:07:37.245 12199.778 - 12250.191: 98.8839% ( 4) 00:07:37.245 12250.191 - 12300.603: 98.9011% ( 3) 00:07:37.245 12905.551 - 13006.375: 98.9068% ( 1) 00:07:37.245 13107.200 - 13208.025: 98.9240% ( 3) 00:07:37.245 13208.025 - 
13308.849: 99.1529% ( 40) 00:07:37.245 13308.849 - 13409.674: 99.2331% ( 14) 00:07:37.245 13409.674 - 13510.498: 99.2674% ( 6) 00:07:37.245 14216.271 - 14317.095: 99.2960% ( 5) 00:07:37.245 14317.095 - 14417.920: 99.3304% ( 6) 00:07:37.245 14417.920 - 14518.745: 99.3590% ( 5) 00:07:37.245 14518.745 - 14619.569: 99.3876% ( 5) 00:07:37.245 14619.569 - 14720.394: 99.4391% ( 9) 00:07:37.245 14720.394 - 14821.218: 99.4963% ( 10) 00:07:37.245 14821.218 - 14922.043: 99.5307% ( 6) 00:07:37.245 14922.043 - 15022.868: 99.5593% ( 5) 00:07:37.245 15022.868 - 15123.692: 99.5879% ( 5) 00:07:37.245 15123.692 - 15224.517: 99.6223% ( 6) 00:07:37.245 15224.517 - 15325.342: 99.6337% ( 2) 00:07:37.245 18148.431 - 18249.255: 99.6394% ( 1) 00:07:37.245 18249.255 - 18350.080: 99.6738% ( 6) 00:07:37.245 18350.080 - 18450.905: 99.7024% ( 5) 00:07:37.245 18450.905 - 18551.729: 99.7768% ( 13) 00:07:37.245 18551.729 - 18652.554: 99.8397% ( 11) 00:07:37.245 18652.554 - 18753.378: 99.8798% ( 7) 00:07:37.245 18753.378 - 18854.203: 99.9141% ( 6) 00:07:37.245 18854.203 - 18955.028: 99.9485% ( 6) 00:07:37.245 18955.028 - 19055.852: 99.9886% ( 7) 00:07:37.245 19257.502 - 19358.326: 99.9943% ( 1) 00:07:37.245 19358.326 - 19459.151: 100.0000% ( 1) 00:07:37.245 00:07:37.245 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:37.245 ============================================================================== 00:07:37.245 Range in us Cumulative IO count 00:07:37.245 3352.418 - 3377.625: 0.0057% ( 1) 00:07:37.245 3604.480 - 3629.686: 0.0401% ( 6) 00:07:37.245 3629.686 - 3654.892: 0.0859% ( 8) 00:07:37.245 3654.892 - 3680.098: 0.1374% ( 9) 00:07:37.245 3680.098 - 3705.305: 0.1889% ( 9) 00:07:37.245 3705.305 - 3730.511: 0.2404% ( 9) 00:07:37.245 3730.511 - 3755.717: 0.2919% ( 9) 00:07:37.245 3755.717 - 3780.923: 0.3033% ( 2) 00:07:37.245 3806.129 - 3831.335: 0.3205% ( 3) 00:07:37.245 3831.335 - 3856.542: 0.3262% ( 1) 00:07:37.245 3856.542 - 3881.748: 0.3434% ( 3) 00:07:37.245 3881.748 - 3906.954: 0.3549% ( 2) 00:07:37.245 3906.954 - 3932.160: 0.3663% ( 2) 00:07:37.245 5217.674 - 5242.880: 0.4006% ( 6) 00:07:37.245 5242.880 - 5268.086: 0.4522% ( 9) 00:07:37.245 5268.086 - 5293.292: 0.5208% ( 12) 00:07:37.245 5293.292 - 5318.498: 0.5952% ( 13) 00:07:37.245 5318.498 - 5343.705: 0.6353% ( 7) 00:07:37.245 5343.705 - 5368.911: 0.6467% ( 2) 00:07:37.245 5368.911 - 5394.117: 0.6582% ( 2) 00:07:37.245 5394.117 - 5419.323: 0.6639% ( 1) 00:07:37.245 5419.323 - 5444.529: 0.6754% ( 2) 00:07:37.245 5444.529 - 5469.735: 0.6868% ( 2) 00:07:37.245 5469.735 - 5494.942: 0.6983% ( 2) 00:07:37.245 5494.942 - 5520.148: 0.7097% ( 2) 00:07:37.245 5520.148 - 5545.354: 0.7212% ( 2) 00:07:37.245 5545.354 - 5570.560: 0.7326% ( 2) 00:07:37.245 5595.766 - 5620.972: 0.7383% ( 1) 00:07:37.245 5696.591 - 5721.797: 0.7440% ( 1) 00:07:37.245 5747.003 - 5772.209: 0.7498% ( 1) 00:07:37.245 5772.209 - 5797.415: 0.7555% ( 1) 00:07:37.245 5797.415 - 5822.622: 0.7784% ( 4) 00:07:37.245 5822.622 - 5847.828: 0.7956% ( 3) 00:07:37.245 5847.828 - 5873.034: 0.8070% ( 2) 00:07:37.245 5873.034 - 5898.240: 0.8413% ( 6) 00:07:37.245 5898.240 - 5923.446: 0.8986% ( 10) 00:07:37.245 5923.446 - 5948.652: 0.9902% ( 16) 00:07:37.245 5948.652 - 5973.858: 1.1046% ( 20) 00:07:37.245 5973.858 - 5999.065: 1.2592% ( 27) 00:07:37.245 5999.065 - 6024.271: 1.4251% ( 29) 00:07:37.245 6024.271 - 6049.477: 1.5282% ( 18) 00:07:37.245 6049.477 - 6074.683: 1.7399% ( 37) 00:07:37.245 6074.683 - 6099.889: 1.9689% ( 40) 00:07:37.245 6099.889 - 6125.095: 2.4725% ( 88) 00:07:37.245 
6125.095 - 6150.302: 2.8159% ( 60) 00:07:37.245 6150.302 - 6175.508: 3.0220% ( 36) 00:07:37.245 6175.508 - 6200.714: 3.3539% ( 58) 00:07:37.245 6200.714 - 6225.920: 3.7317% ( 66) 00:07:37.245 6225.920 - 6251.126: 4.1094% ( 66) 00:07:37.245 6251.126 - 6276.332: 4.5673% ( 80) 00:07:37.245 6276.332 - 6301.538: 5.0767% ( 89) 00:07:37.245 6301.538 - 6326.745: 5.5746% ( 87) 00:07:37.245 6326.745 - 6351.951: 6.3072% ( 128) 00:07:37.245 6351.951 - 6377.157: 6.9082% ( 105) 00:07:37.245 6377.157 - 6402.363: 7.6694% ( 133) 00:07:37.245 6402.363 - 6427.569: 8.7740% ( 193) 00:07:37.245 6427.569 - 6452.775: 9.9702% ( 209) 00:07:37.245 6452.775 - 6503.188: 12.9178% ( 515) 00:07:37.245 6503.188 - 6553.600: 18.6184% ( 996) 00:07:37.245 6553.600 - 6604.012: 23.5348% ( 859) 00:07:37.245 6604.012 - 6654.425: 28.9435% ( 945) 00:07:37.245 6654.425 - 6704.837: 33.4936% ( 795) 00:07:37.245 6704.837 - 6755.249: 37.9579% ( 780) 00:07:37.245 6755.249 - 6805.662: 42.8285% ( 851) 00:07:37.245 6805.662 - 6856.074: 46.9780% ( 725) 00:07:37.245 6856.074 - 6906.486: 50.9329% ( 691) 00:07:37.245 6906.486 - 6956.898: 54.8478% ( 684) 00:07:37.245 6956.898 - 7007.311: 58.4821% ( 635) 00:07:37.245 7007.311 - 7057.723: 61.1951% ( 474) 00:07:37.245 7057.723 - 7108.135: 63.2784% ( 364) 00:07:37.245 7108.135 - 7158.548: 65.5849% ( 403) 00:07:37.245 7158.548 - 7208.960: 67.8571% ( 397) 00:07:37.245 7208.960 - 7259.372: 70.0778% ( 388) 00:07:37.245 7259.372 - 7309.785: 71.9208% ( 322) 00:07:37.245 7309.785 - 7360.197: 72.8709% ( 166) 00:07:37.245 7360.197 - 7410.609: 73.9412% ( 187) 00:07:37.245 7410.609 - 7461.022: 75.1145% ( 205) 00:07:37.245 7461.022 - 7511.434: 76.2420% ( 197) 00:07:37.245 7511.434 - 7561.846: 77.1062% ( 151) 00:07:37.245 7561.846 - 7612.258: 78.1536% ( 183) 00:07:37.245 7612.258 - 7662.671: 79.0350% ( 154) 00:07:37.245 7662.671 - 7713.083: 80.0881% ( 184) 00:07:37.245 7713.083 - 7763.495: 81.5133% ( 249) 00:07:37.245 7763.495 - 7813.908: 82.3317% ( 143) 00:07:37.245 7813.908 - 7864.320: 83.1273% ( 139) 00:07:37.245 7864.320 - 7914.732: 83.6996% ( 100) 00:07:37.245 7914.732 - 7965.145: 84.2033% ( 88) 00:07:37.246 7965.145 - 8015.557: 84.7470% ( 95) 00:07:37.246 8015.557 - 8065.969: 85.1477% ( 70) 00:07:37.246 8065.969 - 8116.382: 85.6513% ( 88) 00:07:37.246 8116.382 - 8166.794: 85.9947% ( 60) 00:07:37.246 8166.794 - 8217.206: 86.4927% ( 87) 00:07:37.246 8217.206 - 8267.618: 86.9677% ( 83) 00:07:37.246 8267.618 - 8318.031: 87.3340% ( 64) 00:07:37.246 8318.031 - 8368.443: 87.5687% ( 41) 00:07:37.246 8368.443 - 8418.855: 87.9350% ( 64) 00:07:37.246 8418.855 - 8469.268: 88.2498% ( 55) 00:07:37.246 8469.268 - 8519.680: 88.5302% ( 49) 00:07:37.246 8519.680 - 8570.092: 88.9309% ( 70) 00:07:37.246 8570.092 - 8620.505: 89.1712% ( 42) 00:07:37.246 8620.505 - 8670.917: 89.4631% ( 51) 00:07:37.246 8670.917 - 8721.329: 89.8581% ( 69) 00:07:37.246 8721.329 - 8771.742: 90.1271% ( 47) 00:07:37.246 8771.742 - 8822.154: 90.3102% ( 32) 00:07:37.246 8822.154 - 8872.566: 90.6422% ( 58) 00:07:37.246 8872.566 - 8922.978: 91.0256% ( 67) 00:07:37.246 8922.978 - 8973.391: 91.2260% ( 35) 00:07:37.246 8973.391 - 9023.803: 91.3061% ( 14) 00:07:37.246 9023.803 - 9074.215: 91.3404% ( 6) 00:07:37.246 9074.215 - 9124.628: 91.3690% ( 5) 00:07:37.246 9124.628 - 9175.040: 91.4034% ( 6) 00:07:37.246 9175.040 - 9225.452: 91.4492% ( 8) 00:07:37.246 9225.452 - 9275.865: 91.5465% ( 17) 00:07:37.246 9275.865 - 9326.277: 91.6838% ( 24) 00:07:37.246 9326.277 - 9376.689: 92.0043% ( 56) 00:07:37.246 9376.689 - 9427.102: 92.1474% ( 25) 00:07:37.246 9427.102 
- 9477.514: 92.2791% ( 23) 00:07:37.246 9477.514 - 9527.926: 92.5481% ( 47) 00:07:37.246 9527.926 - 9578.338: 92.7083% ( 28) 00:07:37.246 9578.338 - 9628.751: 92.7942% ( 15) 00:07:37.246 9628.751 - 9679.163: 92.8972% ( 18) 00:07:37.246 9679.163 - 9729.575: 93.0804% ( 32) 00:07:37.246 9729.575 - 9779.988: 93.3436% ( 46) 00:07:37.246 9779.988 - 9830.400: 93.9560% ( 107) 00:07:37.246 9830.400 - 9880.812: 94.4139% ( 80) 00:07:37.246 9880.812 - 9931.225: 94.6486% ( 41) 00:07:37.246 9931.225 - 9981.637: 94.9176% ( 47) 00:07:37.246 9981.637 - 10032.049: 95.1580% ( 42) 00:07:37.246 10032.049 - 10082.462: 95.3239% ( 29) 00:07:37.246 10082.462 - 10132.874: 95.5014% ( 31) 00:07:37.246 10132.874 - 10183.286: 95.6445% ( 25) 00:07:37.246 10183.286 - 10233.698: 95.7933% ( 26) 00:07:37.246 10233.698 - 10284.111: 95.8906% ( 17) 00:07:37.246 10284.111 - 10334.523: 95.9478% ( 10) 00:07:37.246 10334.523 - 10384.935: 96.0222% ( 13) 00:07:37.246 10384.935 - 10435.348: 96.0966% ( 13) 00:07:37.246 10435.348 - 10485.760: 96.1825% ( 15) 00:07:37.246 10485.760 - 10536.172: 96.2569% ( 13) 00:07:37.246 10536.172 - 10586.585: 96.3255% ( 12) 00:07:37.246 10586.585 - 10636.997: 96.3771% ( 9) 00:07:37.246 10636.997 - 10687.409: 96.4400% ( 11) 00:07:37.246 10687.409 - 10737.822: 96.4744% ( 6) 00:07:37.246 10737.822 - 10788.234: 96.5087% ( 6) 00:07:37.246 10788.234 - 10838.646: 96.5831% ( 13) 00:07:37.246 10838.646 - 10889.058: 96.6976% ( 20) 00:07:37.246 10889.058 - 10939.471: 96.8235% ( 22) 00:07:37.246 10939.471 - 10989.883: 96.9551% ( 23) 00:07:37.246 10989.883 - 11040.295: 97.0410% ( 15) 00:07:37.246 11040.295 - 11090.708: 97.1039% ( 11) 00:07:37.246 11090.708 - 11141.120: 97.2585% ( 27) 00:07:37.246 11141.120 - 11191.532: 97.3558% ( 17) 00:07:37.246 11191.532 - 11241.945: 97.4245% ( 12) 00:07:37.246 11241.945 - 11292.357: 97.5046% ( 14) 00:07:37.246 11292.357 - 11342.769: 97.5675% ( 11) 00:07:37.246 11342.769 - 11393.182: 97.6248% ( 10) 00:07:37.246 11393.182 - 11443.594: 97.6648% ( 7) 00:07:37.246 11443.594 - 11494.006: 97.8308% ( 29) 00:07:37.246 11494.006 - 11544.418: 98.0826% ( 44) 00:07:37.246 11544.418 - 11594.831: 98.1571% ( 13) 00:07:37.246 11594.831 - 11645.243: 98.2200% ( 11) 00:07:37.246 11645.243 - 11695.655: 98.2830% ( 11) 00:07:37.246 11695.655 - 11746.068: 98.3516% ( 12) 00:07:37.246 11746.068 - 11796.480: 98.4089% ( 10) 00:07:37.246 11796.480 - 11846.892: 98.4604% ( 9) 00:07:37.246 11846.892 - 11897.305: 98.5062% ( 8) 00:07:37.246 11897.305 - 11947.717: 98.5462% ( 7) 00:07:37.246 11947.717 - 11998.129: 98.5863% ( 7) 00:07:37.246 11998.129 - 12048.542: 98.6321% ( 8) 00:07:37.246 12048.542 - 12098.954: 98.6722% ( 7) 00:07:37.246 12098.954 - 12149.366: 98.7237% ( 9) 00:07:37.246 12149.366 - 12199.778: 98.7466% ( 4) 00:07:37.246 12199.778 - 12250.191: 98.7752% ( 5) 00:07:37.246 12250.191 - 12300.603: 98.8210% ( 8) 00:07:37.246 12300.603 - 12351.015: 98.8610% ( 7) 00:07:37.246 12351.015 - 12401.428: 98.8954% ( 6) 00:07:37.246 12401.428 - 12451.840: 98.9412% ( 8) 00:07:37.246 12451.840 - 12502.252: 98.9812% ( 7) 00:07:37.246 12502.252 - 12552.665: 99.0213% ( 7) 00:07:37.246 12552.665 - 12603.077: 99.0614% ( 7) 00:07:37.246 12603.077 - 12653.489: 99.0900% ( 5) 00:07:37.246 12653.489 - 12703.902: 99.1071% ( 3) 00:07:37.246 12703.902 - 12754.314: 99.1529% ( 8) 00:07:37.246 12754.314 - 12804.726: 99.1701% ( 3) 00:07:37.246 12804.726 - 12855.138: 99.1815% ( 2) 00:07:37.246 12855.138 - 12905.551: 99.1930% ( 2) 00:07:37.246 12905.551 - 13006.375: 99.2102% ( 3) 00:07:37.246 13006.375 - 13107.200: 99.2331% ( 4) 
00:07:37.246 13107.200 - 13208.025: 99.2502% ( 3) 00:07:37.246 13208.025 - 13308.849: 99.2674% ( 3) 00:07:37.246 13611.323 - 13712.148: 99.2731% ( 1) 00:07:37.246 13712.148 - 13812.972: 99.2788% ( 1) 00:07:37.246 13913.797 - 14014.622: 99.3075% ( 5) 00:07:37.246 14014.622 - 14115.446: 99.3590% ( 9) 00:07:37.246 14115.446 - 14216.271: 99.4105% ( 9) 00:07:37.246 14216.271 - 14317.095: 99.4620% ( 9) 00:07:37.246 14317.095 - 14417.920: 99.4963% ( 6) 00:07:37.246 14417.920 - 14518.745: 99.5364% ( 7) 00:07:37.246 14518.745 - 14619.569: 99.5593% ( 4) 00:07:37.246 14619.569 - 14720.394: 99.5879% ( 5) 00:07:37.246 14720.394 - 14821.218: 99.6108% ( 4) 00:07:37.246 14821.218 - 14922.043: 99.6337% ( 4) 00:07:37.246 18148.431 - 18249.255: 99.6451% ( 2) 00:07:37.246 18249.255 - 18350.080: 99.6566% ( 2) 00:07:37.246 18450.905 - 18551.729: 99.6623% ( 1) 00:07:37.246 18551.729 - 18652.554: 99.6680% ( 1) 00:07:37.246 18652.554 - 18753.378: 99.9199% ( 44) 00:07:37.246 18753.378 - 18854.203: 99.9657% ( 8) 00:07:37.246 18854.203 - 18955.028: 99.9771% ( 2) 00:07:37.246 19055.852 - 19156.677: 99.9943% ( 3) 00:07:37.246 19156.677 - 19257.502: 100.0000% ( 1) 00:07:37.246 00:07:37.504 10:28:12 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:07:37.504 00:07:37.504 real 0m2.421s 00:07:37.504 user 0m2.144s 00:07:37.504 sys 0m0.177s 00:07:37.504 10:28:12 nvme.nvme_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:37.504 10:28:12 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:07:37.504 ************************************ 00:07:37.504 END TEST nvme_perf 00:07:37.504 ************************************ 00:07:37.504 10:28:12 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:37.504 10:28:12 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:37.504 10:28:12 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:37.504 10:28:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:37.504 ************************************ 00:07:37.504 START TEST nvme_hello_world 00:07:37.504 ************************************ 00:07:37.504 10:28:12 nvme.nvme_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:37.504 Initializing NVMe Controllers 00:07:37.504 Attached to 0000:00:10.0 00:07:37.504 Namespace ID: 1 size: 6GB 00:07:37.504 Attached to 0000:00:11.0 00:07:37.504 Namespace ID: 1 size: 5GB 00:07:37.504 Attached to 0000:00:13.0 00:07:37.504 Namespace ID: 1 size: 1GB 00:07:37.504 Attached to 0000:00:12.0 00:07:37.504 Namespace ID: 1 size: 4GB 00:07:37.504 Namespace ID: 2 size: 4GB 00:07:37.504 Namespace ID: 3 size: 4GB 00:07:37.504 Initialization complete. 00:07:37.504 INFO: using host memory buffer for IO 00:07:37.504 Hello world! 00:07:37.504 INFO: using host memory buffer for IO 00:07:37.504 Hello world! 00:07:37.504 INFO: using host memory buffer for IO 00:07:37.504 Hello world! 00:07:37.504 INFO: using host memory buffer for IO 00:07:37.504 Hello world! 00:07:37.504 INFO: using host memory buffer for IO 00:07:37.504 Hello world! 00:07:37.504 INFO: using host memory buffer for IO 00:07:37.504 Hello world! 
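For context, the hello_world example whose output appears above probes the four PCIe controllers, attaches to each active namespace, writes "Hello world!" to LBA 0 through a host memory buffer, and reads it back. A minimal sketch of that flow against SPDK's public NVMe driver API follows; the calls named here (spdk_nvme_probe, spdk_nvme_ctrlr_alloc_io_qpair, spdk_nvme_ns_cmd_write/read, spdk_nvme_qpair_process_completions) are upstream APIs, but the sketch is a simplified illustration rather than the test's source, so exact signatures and option fields should be checked against the spdk/nvme.h shipped in this tree. Error handling is trimmed for brevity.

/* hello_sketch.c - illustrative hello-world flow over SPDK's NVMe driver (not the test source). */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

static volatile bool g_done;

static void io_complete(void *arg, const struct spdk_nvme_cpl *cpl)
{
    g_done = true;                                /* mark the outstanding IO as finished */
}

static bool probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                     struct spdk_nvme_ctrlr_opts *opts)
{
    return true;                                  /* attach to every controller the probe finds */
}

static void attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                      struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
    struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, 1);
    struct spdk_nvme_qpair *qp = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
    uint32_t sz = spdk_nvme_ns_get_sector_size(ns);
    char *buf = spdk_zmalloc(sz, 0x1000, NULL, SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);

    snprintf(buf, sz, "Hello world!");
    g_done = false;
    spdk_nvme_ns_cmd_write(ns, qp, buf, 0, 1, io_complete, NULL, 0);   /* 1 block at LBA 0 */
    while (!g_done) spdk_nvme_qpair_process_completions(qp, 0);

    memset(buf, 0, sz);
    g_done = false;
    spdk_nvme_ns_cmd_read(ns, qp, buf, 0, 1, io_complete, NULL, 0);
    while (!g_done) spdk_nvme_qpair_process_completions(qp, 0);
    printf("%s: %s\n", trid->traddr, buf);        /* expect "Hello world!" back from the namespace */

    spdk_free(buf);
    spdk_nvme_ctrlr_free_io_qpair(qp);
}

int main(void)
{
    struct spdk_env_opts opts;

    spdk_env_opts_init(&opts);
    opts.name = "hello_sketch";
    if (spdk_env_init(&opts) < 0) return 1;
    return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) ? 1 : 0;  /* NULL trid: scan local PCIe */
}

The busy polling on spdk_nvme_qpair_process_completions() mirrors how these test binaries drive IO: completions are reaped by the calling thread rather than delivered by interrupts.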
00:07:37.504 00:07:37.504 real 0m0.194s 00:07:37.504 user 0m0.058s 00:07:37.504 sys 0m0.093s 00:07:37.504 10:28:12 nvme.nvme_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:37.504 10:28:12 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:37.504 ************************************ 00:07:37.504 END TEST nvme_hello_world 00:07:37.504 ************************************ 00:07:37.762 10:28:12 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:07:37.762 10:28:12 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:37.762 10:28:12 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:37.762 10:28:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:37.762 ************************************ 00:07:37.762 START TEST nvme_sgl 00:07:37.762 ************************************ 00:07:37.762 10:28:12 nvme.nvme_sgl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:07:37.762 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:07:37.762 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:07:37.762 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:07:37.762 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:07:37.762 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:07:37.762 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:07:37.762 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:07:37.762 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:07:37.762 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:07:37.762 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:07:37.762 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:07:37.762 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:07:37.762 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:07:37.762 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:07:37.762 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:07:37.762 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:07:37.762 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:07:37.762 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:07:37.762 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:07:37.762 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:07:37.762 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:07:37.762 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:07:37.762 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:07:37.762 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:07:37.762 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:07:37.762 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:07:37.762 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:07:37.762 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:07:37.762 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:07:37.762 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:07:37.762 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:07:37.762 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:07:37.762 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:07:37.762 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
00:07:37.762 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:07:37.762 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:07:37.762 NVMe Readv/Writev Request test 00:07:37.762 Attached to 0000:00:10.0 00:07:37.762 Attached to 0000:00:11.0 00:07:37.762 Attached to 0000:00:13.0 00:07:37.762 Attached to 0000:00:12.0 00:07:37.762 0000:00:10.0: build_io_request_2 test passed 00:07:37.762 0000:00:10.0: build_io_request_4 test passed 00:07:37.762 0000:00:10.0: build_io_request_5 test passed 00:07:37.762 0000:00:10.0: build_io_request_6 test passed 00:07:37.762 0000:00:10.0: build_io_request_7 test passed 00:07:37.762 0000:00:10.0: build_io_request_10 test passed 00:07:37.762 0000:00:11.0: build_io_request_2 test passed 00:07:37.762 0000:00:11.0: build_io_request_4 test passed 00:07:37.762 0000:00:11.0: build_io_request_5 test passed 00:07:37.762 0000:00:11.0: build_io_request_6 test passed 00:07:37.762 0000:00:11.0: build_io_request_7 test passed 00:07:37.762 0000:00:11.0: build_io_request_10 test passed 00:07:37.762 Cleaning up... 00:07:38.020 00:07:38.020 real 0m0.233s 00:07:38.020 user 0m0.104s 00:07:38.020 sys 0m0.088s 00:07:38.020 10:28:12 nvme.nvme_sgl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:38.020 10:28:12 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:07:38.020 ************************************ 00:07:38.020 END TEST nvme_sgl 00:07:38.020 ************************************ 00:07:38.020 10:28:12 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:38.020 10:28:12 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:38.020 10:28:12 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:38.020 10:28:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:38.020 ************************************ 00:07:38.020 START TEST nvme_e2edp 00:07:38.020 ************************************ 00:07:38.020 10:28:12 nvme.nvme_e2edp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:38.020 NVMe Write/Read with End-to-End data protection test 00:07:38.020 Attached to 0000:00:10.0 00:07:38.020 Attached to 0000:00:11.0 00:07:38.020 Attached to 0000:00:13.0 00:07:38.020 Attached to 0000:00:12.0 00:07:38.020 Cleaning up... 
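The nvme_e2edp pass above exercises NVMe end-to-end data protection: each sector carries protection information (a 16-bit guard tag plus application and reference tags) that the controller verifies on both the write and the read path. Purely as an illustration of the arithmetic behind the guard tag (this is not the nvme_dp test source), the following self-contained snippet computes the T10-DIF CRC-16 over one 512-byte sector using the 0x8BB7 polynomial:

/* dif_guard.c - compute the T10-DIF guard tag (CRC-16, polynomial 0x8BB7) over one sector. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

static uint16_t t10dif_crc16(const uint8_t *buf, size_t len)
{
    uint16_t crc = 0;
    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)buf[i] << 8;
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7) : (uint16_t)(crc << 1);
    }
    return crc;
}

int main(void)
{
    uint8_t sector[512];

    memset(sector, 0xA5, sizeof(sector));          /* pretend this is the data being written */
    uint16_t guard = t10dif_crc16(sector, sizeof(sector));
    /* The drive recomputes this CRC on write and again on read; a mismatch is reported
     * as a guard-check error instead of silently returning corrupted data. */
    printf("guard tag: 0x%04x\n", guard);
    return 0;
}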
00:07:38.020 00:07:38.020 real 0m0.188s 00:07:38.020 user 0m0.055s 00:07:38.020 sys 0m0.087s 00:07:38.020 10:28:12 nvme.nvme_e2edp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:38.020 10:28:12 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:07:38.020 ************************************ 00:07:38.020 END TEST nvme_e2edp 00:07:38.020 ************************************ 00:07:38.278 10:28:12 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:38.278 10:28:12 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:38.278 10:28:12 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:38.278 10:28:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:38.278 ************************************ 00:07:38.278 START TEST nvme_reserve 00:07:38.278 ************************************ 00:07:38.278 10:28:12 nvme.nvme_reserve -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:38.278 ===================================================== 00:07:38.278 NVMe Controller at PCI bus 0, device 16, function 0 00:07:38.278 ===================================================== 00:07:38.278 Reservations: Not Supported 00:07:38.278 ===================================================== 00:07:38.278 NVMe Controller at PCI bus 0, device 17, function 0 00:07:38.278 ===================================================== 00:07:38.278 Reservations: Not Supported 00:07:38.278 ===================================================== 00:07:38.278 NVMe Controller at PCI bus 0, device 19, function 0 00:07:38.278 ===================================================== 00:07:38.278 Reservations: Not Supported 00:07:38.278 ===================================================== 00:07:38.278 NVMe Controller at PCI bus 0, device 18, function 0 00:07:38.278 ===================================================== 00:07:38.278 Reservations: Not Supported 00:07:38.278 Reservation test passed 00:07:38.278 00:07:38.278 real 0m0.182s 00:07:38.278 user 0m0.054s 00:07:38.278 sys 0m0.083s 00:07:38.278 10:28:12 nvme.nvme_reserve -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:38.278 10:28:12 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:07:38.278 ************************************ 00:07:38.278 END TEST nvme_reserve 00:07:38.278 ************************************ 00:07:38.278 10:28:13 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:38.278 10:28:13 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:38.278 10:28:13 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:38.278 10:28:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:38.278 ************************************ 00:07:38.278 START TEST nvme_err_injection 00:07:38.278 ************************************ 00:07:38.278 10:28:13 nvme.nvme_err_injection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:38.535 NVMe Error Injection test 00:07:38.535 Attached to 0000:00:10.0 00:07:38.535 Attached to 0000:00:11.0 00:07:38.535 Attached to 0000:00:13.0 00:07:38.535 Attached to 0000:00:12.0 00:07:38.535 0000:00:11.0: get features failed as expected 00:07:38.535 0000:00:13.0: get features failed as expected 00:07:38.535 0000:00:12.0: get features failed as expected 00:07:38.535 0000:00:10.0: get features failed as expected 00:07:38.535 
0000:00:10.0: get features successfully as expected 00:07:38.535 0000:00:11.0: get features successfully as expected 00:07:38.535 0000:00:13.0: get features successfully as expected 00:07:38.535 0000:00:12.0: get features successfully as expected 00:07:38.535 0000:00:10.0: read failed as expected 00:07:38.535 0000:00:11.0: read failed as expected 00:07:38.535 0000:00:12.0: read failed as expected 00:07:38.535 0000:00:13.0: read failed as expected 00:07:38.535 0000:00:10.0: read successfully as expected 00:07:38.535 0000:00:13.0: read successfully as expected 00:07:38.535 0000:00:11.0: read successfully as expected 00:07:38.535 0000:00:12.0: read successfully as expected 00:07:38.535 Cleaning up... 00:07:38.535 00:07:38.535 real 0m0.208s 00:07:38.535 user 0m0.066s 00:07:38.535 sys 0m0.085s 00:07:38.535 10:28:13 nvme.nvme_err_injection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:38.535 10:28:13 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:07:38.535 ************************************ 00:07:38.535 END TEST nvme_err_injection 00:07:38.535 ************************************ 00:07:38.535 10:28:13 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:38.535 10:28:13 nvme -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:07:38.535 10:28:13 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:38.535 10:28:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:38.535 ************************************ 00:07:38.535 START TEST nvme_overhead 00:07:38.535 ************************************ 00:07:38.535 10:28:13 nvme.nvme_overhead -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:39.959 Initializing NVMe Controllers 00:07:39.959 Attached to 0000:00:10.0 00:07:39.959 Attached to 0000:00:11.0 00:07:39.959 Attached to 0000:00:13.0 00:07:39.959 Attached to 0000:00:12.0 00:07:39.959 Initialization complete. Launching workers. 
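The nvme_overhead run starting here times how long each submission call and each completion callback spends on the submitting core, then reports avg/min/max in nanoseconds plus the cumulative histograms printed below. A generic sketch of that measurement loop in plain C follows; do_submit() is a hypothetical stand-in for whatever call is being timed, not an SPDK function:

/* submit_overhead.c - time a call repeatedly and keep avg/min/max in nanoseconds. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

static void do_submit(void) { /* placeholder for the submission path being measured */ }

static uint64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
}

int main(void)
{
    uint64_t min = UINT64_MAX, max = 0, total = 0;
    const int iters = 100000;

    for (int i = 0; i < iters; i++) {
        uint64_t t0 = now_ns();
        do_submit();
        uint64_t dt = now_ns() - t0;
        total += dt;
        if (dt < min) min = dt;
        if (dt > max) max = dt;
    }
    printf("submit (in ns) avg, min, max = %.1f, %" PRIu64 ", %" PRIu64 "\n",
           (double)total / iters, min, max);
    return 0;
}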
00:07:39.959 submit (in ns) avg, min, max = 11317.9, 10230.0, 252000.8 00:07:39.959 complete (in ns) avg, min, max = 7561.7, 7179.2, 218599.2 00:07:39.959 00:07:39.959 Submit histogram 00:07:39.959 ================ 00:07:39.959 Range in us Cumulative Count 00:07:39.959 10.191 - 10.240: 0.0058% ( 1) 00:07:39.959 10.683 - 10.732: 0.0175% ( 2) 00:07:39.959 10.732 - 10.782: 0.1404% ( 21) 00:07:39.959 10.782 - 10.831: 0.8247% ( 117) 00:07:39.959 10.831 - 10.880: 4.5678% ( 640) 00:07:39.959 10.880 - 10.929: 15.7211% ( 1907) 00:07:39.959 10.929 - 10.978: 32.8342% ( 2926) 00:07:39.959 10.978 - 11.028: 49.4385% ( 2839) 00:07:39.959 11.028 - 11.077: 62.0131% ( 2150) 00:07:39.959 11.077 - 11.126: 69.9556% ( 1358) 00:07:39.959 11.126 - 11.175: 74.4882% ( 775) 00:07:39.959 11.175 - 11.225: 77.5763% ( 528) 00:07:39.959 11.225 - 11.274: 79.7169% ( 366) 00:07:39.959 11.274 - 11.323: 81.2025% ( 254) 00:07:39.959 11.323 - 11.372: 82.5184% ( 225) 00:07:39.959 11.372 - 11.422: 83.8402% ( 226) 00:07:39.959 11.422 - 11.471: 85.2380% ( 239) 00:07:39.959 11.471 - 11.520: 86.6066% ( 234) 00:07:39.959 11.520 - 11.569: 88.2033% ( 273) 00:07:39.959 11.569 - 11.618: 89.3964% ( 204) 00:07:39.959 11.618 - 11.668: 90.3439% ( 162) 00:07:39.959 11.668 - 11.717: 91.2797% ( 160) 00:07:39.959 11.717 - 11.766: 92.2272% ( 162) 00:07:39.959 11.766 - 11.815: 93.1162% ( 152) 00:07:39.959 11.815 - 11.865: 93.7946% ( 116) 00:07:39.959 11.865 - 11.914: 94.4204% ( 107) 00:07:39.959 11.914 - 11.963: 94.8883% ( 80) 00:07:39.959 11.963 - 12.012: 95.2918% ( 69) 00:07:39.959 12.012 - 12.062: 95.5199% ( 39) 00:07:39.959 12.062 - 12.111: 95.6954% ( 30) 00:07:39.959 12.111 - 12.160: 95.8475% ( 26) 00:07:39.959 12.160 - 12.209: 95.9820% ( 23) 00:07:39.959 12.209 - 12.258: 96.0990% ( 20) 00:07:39.959 12.258 - 12.308: 96.1691% ( 12) 00:07:39.959 12.308 - 12.357: 96.2218% ( 9) 00:07:39.959 12.357 - 12.406: 96.2861% ( 11) 00:07:39.959 12.406 - 12.455: 96.3329% ( 8) 00:07:39.959 12.455 - 12.505: 96.3797% ( 8) 00:07:39.959 12.505 - 12.554: 96.4265% ( 8) 00:07:39.959 12.554 - 12.603: 96.4616% ( 6) 00:07:39.959 12.603 - 12.702: 96.4908% ( 5) 00:07:39.959 12.702 - 12.800: 96.5785% ( 15) 00:07:39.959 12.800 - 12.898: 96.6546% ( 13) 00:07:39.959 12.898 - 12.997: 96.7774% ( 21) 00:07:39.959 12.997 - 13.095: 96.8476% ( 12) 00:07:39.959 13.095 - 13.194: 96.9704% ( 21) 00:07:39.959 13.194 - 13.292: 97.1634% ( 33) 00:07:39.959 13.292 - 13.391: 97.2804% ( 20) 00:07:39.959 13.391 - 13.489: 97.4149% ( 23) 00:07:39.959 13.489 - 13.588: 97.4909% ( 13) 00:07:39.959 13.588 - 13.686: 97.6079% ( 20) 00:07:39.959 13.686 - 13.785: 97.6488% ( 7) 00:07:39.959 13.785 - 13.883: 97.7366% ( 15) 00:07:39.959 13.883 - 13.982: 97.7951% ( 10) 00:07:39.959 13.982 - 14.080: 97.8594% ( 11) 00:07:39.959 14.080 - 14.178: 97.9296% ( 12) 00:07:39.959 14.178 - 14.277: 97.9764% ( 8) 00:07:39.959 14.277 - 14.375: 97.9998% ( 4) 00:07:39.959 14.375 - 14.474: 98.0115% ( 2) 00:07:39.959 14.474 - 14.572: 98.0232% ( 2) 00:07:39.959 14.572 - 14.671: 98.0641% ( 7) 00:07:39.959 14.671 - 14.769: 98.1050% ( 7) 00:07:39.959 14.769 - 14.868: 98.1401% ( 6) 00:07:39.959 14.868 - 14.966: 98.1869% ( 8) 00:07:39.959 14.966 - 15.065: 98.2688% ( 14) 00:07:39.959 15.065 - 15.163: 98.2805% ( 2) 00:07:39.959 15.163 - 15.262: 98.3039% ( 4) 00:07:39.959 15.262 - 15.360: 98.3448% ( 7) 00:07:39.959 15.360 - 15.458: 98.3799% ( 6) 00:07:39.959 15.458 - 15.557: 98.4209% ( 7) 00:07:39.959 15.557 - 15.655: 98.4384% ( 3) 00:07:39.959 15.655 - 15.754: 98.4677% ( 5) 00:07:39.959 15.754 - 15.852: 98.4794% ( 2) 00:07:39.959 
15.852 - 15.951: 98.4969% ( 3) 00:07:39.959 15.951 - 16.049: 98.5027% ( 1) 00:07:39.959 16.049 - 16.148: 98.5086% ( 1) 00:07:39.959 16.148 - 16.246: 98.5261% ( 3) 00:07:39.959 16.246 - 16.345: 98.5437% ( 3) 00:07:39.959 16.345 - 16.443: 98.5495% ( 1) 00:07:39.959 16.443 - 16.542: 98.5905% ( 7) 00:07:39.959 16.542 - 16.640: 98.6373% ( 8) 00:07:39.959 16.640 - 16.738: 98.7075% ( 12) 00:07:39.959 16.738 - 16.837: 98.8069% ( 17) 00:07:39.959 16.837 - 16.935: 98.8771% ( 12) 00:07:39.959 16.935 - 17.034: 98.9472% ( 12) 00:07:39.959 17.034 - 17.132: 98.9940% ( 8) 00:07:39.959 17.132 - 17.231: 99.1052% ( 19) 00:07:39.959 17.231 - 17.329: 99.1812% ( 13) 00:07:39.959 17.329 - 17.428: 99.2923% ( 19) 00:07:39.959 17.428 - 17.526: 99.3157% ( 4) 00:07:39.959 17.526 - 17.625: 99.3917% ( 13) 00:07:39.959 17.625 - 17.723: 99.4561% ( 11) 00:07:39.959 17.723 - 17.822: 99.5029% ( 8) 00:07:39.959 17.822 - 17.920: 99.5321% ( 5) 00:07:39.959 17.920 - 18.018: 99.5730% ( 7) 00:07:39.959 18.018 - 18.117: 99.5906% ( 3) 00:07:39.959 18.117 - 18.215: 99.6257% ( 6) 00:07:39.959 18.215 - 18.314: 99.6491% ( 4) 00:07:39.959 18.314 - 18.412: 99.7017% ( 9) 00:07:39.959 18.412 - 18.511: 99.7193% ( 3) 00:07:39.959 18.511 - 18.609: 99.7427% ( 4) 00:07:39.959 18.708 - 18.806: 99.7485% ( 1) 00:07:39.959 18.905 - 19.003: 99.7544% ( 1) 00:07:39.959 19.003 - 19.102: 99.7602% ( 1) 00:07:39.959 19.200 - 19.298: 99.7778% ( 3) 00:07:39.959 19.298 - 19.397: 99.7836% ( 1) 00:07:39.959 19.397 - 19.495: 99.7894% ( 1) 00:07:39.959 19.495 - 19.594: 99.7953% ( 1) 00:07:39.959 19.594 - 19.692: 99.8011% ( 1) 00:07:39.959 19.692 - 19.791: 99.8187% ( 3) 00:07:39.959 20.185 - 20.283: 99.8245% ( 1) 00:07:39.959 20.972 - 21.071: 99.8304% ( 1) 00:07:39.959 21.071 - 21.169: 99.8362% ( 1) 00:07:39.959 21.169 - 21.268: 99.8479% ( 2) 00:07:39.959 21.268 - 21.366: 99.8538% ( 1) 00:07:39.959 21.366 - 21.465: 99.8655% ( 2) 00:07:39.959 21.465 - 21.563: 99.8713% ( 1) 00:07:39.959 21.858 - 21.957: 99.8772% ( 1) 00:07:39.959 22.055 - 22.154: 99.8830% ( 1) 00:07:39.959 22.942 - 23.040: 99.8947% ( 2) 00:07:39.959 23.040 - 23.138: 99.9006% ( 1) 00:07:39.959 23.335 - 23.434: 99.9064% ( 1) 00:07:39.959 24.025 - 24.123: 99.9123% ( 1) 00:07:39.959 25.108 - 25.206: 99.9240% ( 2) 00:07:39.959 25.206 - 25.403: 99.9298% ( 1) 00:07:39.959 25.600 - 25.797: 99.9357% ( 1) 00:07:39.959 26.191 - 26.388: 99.9415% ( 1) 00:07:39.959 26.585 - 26.782: 99.9474% ( 1) 00:07:39.959 27.175 - 27.372: 99.9532% ( 1) 00:07:39.959 31.311 - 31.508: 99.9591% ( 1) 00:07:39.959 34.265 - 34.462: 99.9649% ( 1) 00:07:39.959 50.018 - 50.215: 99.9708% ( 1) 00:07:39.959 62.622 - 63.015: 99.9766% ( 1) 00:07:39.959 77.194 - 77.588: 99.9825% ( 1) 00:07:39.959 157.538 - 158.326: 99.9883% ( 1) 00:07:39.959 244.185 - 245.760: 99.9942% ( 1) 00:07:39.959 250.486 - 252.062: 100.0000% ( 1) 00:07:39.959 00:07:39.959 Complete histogram 00:07:39.959 ================== 00:07:39.959 Range in us Cumulative Count 00:07:39.959 7.138 - 7.188: 0.0175% ( 3) 00:07:39.959 7.188 - 7.237: 0.1813% ( 28) 00:07:39.960 7.237 - 7.286: 3.0237% ( 486) 00:07:39.960 7.286 - 7.335: 16.2358% ( 2259) 00:07:39.960 7.335 - 7.385: 40.3907% ( 4130) 00:07:39.960 7.385 - 7.434: 66.0487% ( 4387) 00:07:39.960 7.434 - 7.483: 81.6528% ( 2668) 00:07:39.960 7.483 - 7.532: 89.5134% ( 1344) 00:07:39.960 7.532 - 7.582: 93.1863% ( 628) 00:07:39.960 7.582 - 7.631: 94.8532% ( 285) 00:07:39.960 7.631 - 7.680: 95.6252% ( 132) 00:07:39.960 7.680 - 7.729: 95.9410% ( 54) 00:07:39.960 7.729 - 7.778: 96.1165% ( 30) 00:07:39.960 7.778 - 7.828: 96.2452% ( 22) 
00:07:39.960 7.828 - 7.877: 96.3095% ( 11) 00:07:39.960 7.877 - 7.926: 96.3797% ( 12) 00:07:39.960 7.926 - 7.975: 96.4148% ( 6) 00:07:39.960 7.975 - 8.025: 96.4616% ( 8) 00:07:39.960 8.025 - 8.074: 96.5376% ( 13) 00:07:39.960 8.074 - 8.123: 96.6955% ( 27) 00:07:39.960 8.123 - 8.172: 96.8593% ( 28) 00:07:39.960 8.172 - 8.222: 97.1166% ( 44) 00:07:39.960 8.222 - 8.271: 97.3857% ( 46) 00:07:39.960 8.271 - 8.320: 97.6488% ( 45) 00:07:39.960 8.320 - 8.369: 97.7658% ( 20) 00:07:39.960 8.369 - 8.418: 97.8185% ( 9) 00:07:39.960 8.418 - 8.468: 97.8711% ( 9) 00:07:39.960 8.468 - 8.517: 97.9296% ( 10) 00:07:39.960 8.517 - 8.566: 97.9413% ( 2) 00:07:39.960 8.566 - 8.615: 97.9471% ( 1) 00:07:39.960 8.812 - 8.862: 97.9588% ( 2) 00:07:39.960 8.862 - 8.911: 97.9647% ( 1) 00:07:39.960 8.960 - 9.009: 97.9705% ( 1) 00:07:39.960 9.058 - 9.108: 97.9764% ( 1) 00:07:39.960 9.108 - 9.157: 97.9881% ( 2) 00:07:39.960 9.157 - 9.206: 97.9998% ( 2) 00:07:39.960 9.206 - 9.255: 98.0115% ( 2) 00:07:39.960 9.255 - 9.305: 98.0173% ( 1) 00:07:39.960 9.305 - 9.354: 98.0232% ( 1) 00:07:39.960 9.354 - 9.403: 98.0349% ( 2) 00:07:39.960 9.452 - 9.502: 98.0524% ( 3) 00:07:39.960 9.502 - 9.551: 98.0583% ( 1) 00:07:39.960 9.551 - 9.600: 98.0699% ( 2) 00:07:39.960 9.600 - 9.649: 98.0875% ( 3) 00:07:39.960 9.649 - 9.698: 98.0992% ( 2) 00:07:39.960 9.698 - 9.748: 98.1050% ( 1) 00:07:39.960 9.748 - 9.797: 98.1167% ( 2) 00:07:39.960 9.797 - 9.846: 98.1284% ( 2) 00:07:39.960 9.895 - 9.945: 98.1343% ( 1) 00:07:39.960 9.945 - 9.994: 98.1518% ( 3) 00:07:39.960 10.043 - 10.092: 98.1635% ( 2) 00:07:39.960 10.092 - 10.142: 98.1869% ( 4) 00:07:39.960 10.142 - 10.191: 98.2103% ( 4) 00:07:39.960 10.191 - 10.240: 98.2220% ( 2) 00:07:39.960 10.240 - 10.289: 98.2337% ( 2) 00:07:39.960 10.289 - 10.338: 98.2454% ( 2) 00:07:39.960 10.338 - 10.388: 98.2747% ( 5) 00:07:39.960 10.388 - 10.437: 98.2863% ( 2) 00:07:39.960 10.437 - 10.486: 98.2922% ( 1) 00:07:39.960 10.486 - 10.535: 98.3039% ( 2) 00:07:39.960 10.535 - 10.585: 98.3214% ( 3) 00:07:39.960 10.585 - 10.634: 98.3331% ( 2) 00:07:39.960 10.634 - 10.683: 98.3507% ( 3) 00:07:39.960 10.683 - 10.732: 98.3624% ( 2) 00:07:39.960 10.732 - 10.782: 98.3741% ( 2) 00:07:39.960 10.782 - 10.831: 98.3858% ( 2) 00:07:39.960 10.831 - 10.880: 98.3975% ( 2) 00:07:39.960 10.880 - 10.929: 98.4033% ( 1) 00:07:39.960 11.323 - 11.372: 98.4092% ( 1) 00:07:39.960 11.569 - 11.618: 98.4150% ( 1) 00:07:39.960 11.865 - 11.914: 98.4209% ( 1) 00:07:39.960 12.062 - 12.111: 98.4267% ( 1) 00:07:39.960 12.160 - 12.209: 98.4326% ( 1) 00:07:39.960 12.406 - 12.455: 98.4384% ( 1) 00:07:39.960 12.554 - 12.603: 98.4443% ( 1) 00:07:39.960 12.603 - 12.702: 98.4501% ( 1) 00:07:39.960 12.702 - 12.800: 98.4560% ( 1) 00:07:39.960 12.800 - 12.898: 98.4618% ( 1) 00:07:39.960 12.898 - 12.997: 98.5144% ( 9) 00:07:39.960 12.997 - 13.095: 98.6022% ( 15) 00:07:39.960 13.095 - 13.194: 98.6724% ( 12) 00:07:39.960 13.194 - 13.292: 98.7250% ( 9) 00:07:39.960 13.292 - 13.391: 98.8361% ( 19) 00:07:39.960 13.391 - 13.489: 98.9122% ( 13) 00:07:39.960 13.489 - 13.588: 98.9940% ( 14) 00:07:39.960 13.588 - 13.686: 99.0876% ( 16) 00:07:39.960 13.686 - 13.785: 99.1578% ( 12) 00:07:39.960 13.785 - 13.883: 99.2397% ( 14) 00:07:39.960 13.883 - 13.982: 99.3040% ( 11) 00:07:39.960 13.982 - 14.080: 99.3917% ( 15) 00:07:39.960 14.080 - 14.178: 99.4268% ( 6) 00:07:39.960 14.178 - 14.277: 99.4912% ( 11) 00:07:39.960 14.277 - 14.375: 99.5204% ( 5) 00:07:39.960 14.375 - 14.474: 99.5672% ( 8) 00:07:39.960 14.474 - 14.572: 99.6257% ( 10) 00:07:39.960 14.572 - 14.671: 99.6549% ( 
5) 00:07:39.960 14.671 - 14.769: 99.6725% ( 3) 00:07:39.960 14.769 - 14.868: 99.7017% ( 5) 00:07:39.960 14.868 - 14.966: 99.7544% ( 9) 00:07:39.960 14.966 - 15.065: 99.7661% ( 2) 00:07:39.960 15.065 - 15.163: 99.7836% ( 3) 00:07:39.960 15.163 - 15.262: 99.8011% ( 3) 00:07:39.960 15.360 - 15.458: 99.8128% ( 2) 00:07:39.960 15.458 - 15.557: 99.8187% ( 1) 00:07:39.960 15.852 - 15.951: 99.8304% ( 2) 00:07:39.960 16.148 - 16.246: 99.8421% ( 2) 00:07:39.960 16.443 - 16.542: 99.8479% ( 1) 00:07:39.960 16.640 - 16.738: 99.8538% ( 1) 00:07:39.960 17.132 - 17.231: 99.8596% ( 1) 00:07:39.960 17.526 - 17.625: 99.8713% ( 2) 00:07:39.960 17.625 - 17.723: 99.8772% ( 1) 00:07:39.960 17.723 - 17.822: 99.8830% ( 1) 00:07:39.960 17.920 - 18.018: 99.8889% ( 1) 00:07:39.960 18.018 - 18.117: 99.8947% ( 1) 00:07:39.960 18.117 - 18.215: 99.9006% ( 1) 00:07:39.960 18.314 - 18.412: 99.9064% ( 1) 00:07:39.960 18.412 - 18.511: 99.9123% ( 1) 00:07:39.960 18.609 - 18.708: 99.9181% ( 1) 00:07:39.960 18.708 - 18.806: 99.9240% ( 1) 00:07:39.960 18.806 - 18.905: 99.9357% ( 2) 00:07:39.960 19.003 - 19.102: 99.9532% ( 3) 00:07:39.960 21.071 - 21.169: 99.9591% ( 1) 00:07:39.960 21.268 - 21.366: 99.9649% ( 1) 00:07:39.960 22.154 - 22.252: 99.9708% ( 1) 00:07:39.960 31.114 - 31.311: 99.9766% ( 1) 00:07:39.960 39.188 - 39.385: 99.9825% ( 1) 00:07:39.960 65.378 - 65.772: 99.9883% ( 1) 00:07:39.960 65.772 - 66.166: 99.9942% ( 1) 00:07:39.960 217.403 - 218.978: 100.0000% ( 1) 00:07:39.960 00:07:39.960 00:07:39.960 real 0m1.179s 00:07:39.960 user 0m1.052s 00:07:39.960 sys 0m0.083s 00:07:39.960 10:28:14 nvme.nvme_overhead -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:39.960 10:28:14 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:07:39.960 ************************************ 00:07:39.960 END TEST nvme_overhead 00:07:39.960 ************************************ 00:07:39.960 10:28:14 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:39.960 10:28:14 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:07:39.960 10:28:14 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:39.960 10:28:14 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:39.960 ************************************ 00:07:39.960 START TEST nvme_arbitration 00:07:39.960 ************************************ 00:07:39.960 10:28:14 nvme.nvme_arbitration -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:43.257 Initializing NVMe Controllers 00:07:43.257 Attached to 0000:00:10.0 00:07:43.257 Attached to 0000:00:11.0 00:07:43.257 Attached to 0000:00:13.0 00:07:43.257 Attached to 0000:00:12.0 00:07:43.257 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:07:43.257 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:07:43.257 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:07:43.257 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:07:43.257 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:07:43.257 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:07:43.257 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:07:43.257 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:07:43.257 Initialization complete. Launching workers. 
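The nvme_arbitration example launched here creates queue pairs at different priorities on each core and reports how many IOs each priority class completes, as the per-core IO/s lines that follow show. The actual scheduling happens inside the controller; purely as a mental model, weighted round robin among priority classes behaves like the toy loop below, where the 4/2/1 weights are made-up numbers and not values taken from this run:

/* wrr_model.c - toy weighted-round-robin pick among high/medium/low priority queues. */
#include <stdio.h>

int main(void)
{
    const char *name[3]   = { "high", "medium", "low" };
    const int   weight[3] = { 4, 2, 1 };          /* illustrative credits per arbitration round */
    int pending[3]        = { 1000, 1000, 1000 }; /* commands waiting in each class */
    int served[3]         = { 0, 0, 0 };

    for (int round = 0; round < 100; round++) {
        for (int c = 0; c < 3; c++) {
            /* a class may issue up to weight[c] commands before the arbiter moves on */
            for (int burst = 0; burst < weight[c] && pending[c] > 0; burst++) {
                pending[c]--;
                served[c]++;
            }
        }
    }
    for (int c = 0; c < 3; c++)
        printf("%-6s served %d commands\n", name[c], served[c]);   /* roughly a 4:2:1 split */
    return 0;
}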
00:07:43.257 Starting thread on core 1 with urgent priority queue 00:07:43.257 Starting thread on core 2 with urgent priority queue 00:07:43.257 Starting thread on core 3 with urgent priority queue 00:07:43.257 Starting thread on core 0 with urgent priority queue 00:07:43.257 QEMU NVMe Ctrl (12340 ) core 0: 6357.33 IO/s 15.73 secs/100000 ios 00:07:43.257 QEMU NVMe Ctrl (12342 ) core 0: 6357.33 IO/s 15.73 secs/100000 ios 00:07:43.257 QEMU NVMe Ctrl (12341 ) core 1: 6250.67 IO/s 16.00 secs/100000 ios 00:07:43.257 QEMU NVMe Ctrl (12342 ) core 1: 6250.67 IO/s 16.00 secs/100000 ios 00:07:43.257 QEMU NVMe Ctrl (12343 ) core 2: 5973.33 IO/s 16.74 secs/100000 ios 00:07:43.257 QEMU NVMe Ctrl (12342 ) core 3: 5952.00 IO/s 16.80 secs/100000 ios 00:07:43.257 ======================================================== 00:07:43.257 00:07:43.257 00:07:43.257 real 0m3.211s 00:07:43.257 user 0m9.015s 00:07:43.257 sys 0m0.100s 00:07:43.257 10:28:17 nvme.nvme_arbitration -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:43.257 10:28:17 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:07:43.257 ************************************ 00:07:43.257 END TEST nvme_arbitration 00:07:43.257 ************************************ 00:07:43.257 10:28:17 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:43.257 10:28:17 nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:43.257 10:28:17 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:43.257 10:28:17 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:43.257 ************************************ 00:07:43.257 START TEST nvme_single_aen 00:07:43.257 ************************************ 00:07:43.257 10:28:17 nvme.nvme_single_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:43.257 Asynchronous Event Request test 00:07:43.257 Attached to 0000:00:10.0 00:07:43.257 Attached to 0000:00:11.0 00:07:43.257 Attached to 0000:00:13.0 00:07:43.257 Attached to 0000:00:12.0 00:07:43.257 Reset controller to setup AER completions for this process 00:07:43.257 Registering asynchronous event callbacks... 
00:07:43.257 Getting orig temperature thresholds of all controllers 00:07:43.257 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:43.257 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:43.257 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:43.257 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:43.257 Setting all controllers temperature threshold low to trigger AER 00:07:43.257 Waiting for all controllers temperature threshold to be set lower 00:07:43.257 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:43.257 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:07:43.257 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:43.257 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:07:43.257 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:43.257 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:07:43.257 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:43.257 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:07:43.257 Waiting for all controllers to trigger AER and reset threshold 00:07:43.257 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:43.257 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:43.257 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:43.257 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:43.257 Cleaning up... 00:07:43.257 00:07:43.257 real 0m0.195s 00:07:43.257 user 0m0.064s 00:07:43.257 sys 0m0.087s 00:07:43.257 10:28:17 nvme.nvme_single_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:43.257 ************************************ 00:07:43.257 END TEST nvme_single_aen 00:07:43.257 ************************************ 00:07:43.257 10:28:17 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:07:43.257 10:28:18 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:07:43.257 10:28:18 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:43.257 10:28:18 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:43.257 10:28:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:43.257 ************************************ 00:07:43.257 START TEST nvme_doorbell_aers 00:07:43.257 ************************************ 00:07:43.257 10:28:18 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1125 -- # nvme_doorbell_aers 00:07:43.257 10:28:18 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:07:43.257 10:28:18 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:07:43.257 10:28:18 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:07:43.257 10:28:18 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:07:43.257 10:28:18 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # bdfs=() 00:07:43.258 10:28:18 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # local bdfs 00:07:43.258 10:28:18 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:43.258 10:28:18 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:43.258 10:28:18 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 
00:07:43.518 10:28:18 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:07:43.518 10:28:18 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:43.518 10:28:18 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:43.518 10:28:18 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:07:43.518 [2024-09-28 10:28:18.266849] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76383) is not found. Dropping the request. 00:07:53.527 Executing: test_write_invalid_db 00:07:53.527 Waiting for AER completion... 00:07:53.527 Failure: test_write_invalid_db 00:07:53.527 00:07:53.527 Executing: test_invalid_db_write_overflow_sq 00:07:53.527 Waiting for AER completion... 00:07:53.527 Failure: test_invalid_db_write_overflow_sq 00:07:53.527 00:07:53.527 Executing: test_invalid_db_write_overflow_cq 00:07:53.527 Waiting for AER completion... 00:07:53.527 Failure: test_invalid_db_write_overflow_cq 00:07:53.527 00:07:53.527 10:28:28 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:53.527 10:28:28 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:07:53.527 [2024-09-28 10:28:28.300129] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76383) is not found. Dropping the request. 00:08:03.490 Executing: test_write_invalid_db 00:08:03.490 Waiting for AER completion... 00:08:03.490 Failure: test_write_invalid_db 00:08:03.490 00:08:03.490 Executing: test_invalid_db_write_overflow_sq 00:08:03.490 Waiting for AER completion... 00:08:03.490 Failure: test_invalid_db_write_overflow_sq 00:08:03.490 00:08:03.490 Executing: test_invalid_db_write_overflow_cq 00:08:03.490 Waiting for AER completion... 00:08:03.490 Failure: test_invalid_db_write_overflow_cq 00:08:03.490 00:08:03.490 10:28:38 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:03.491 10:28:38 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:03.749 [2024-09-28 10:28:38.319773] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76383) is not found. Dropping the request. 00:08:13.717 Executing: test_write_invalid_db 00:08:13.717 Waiting for AER completion... 00:08:13.717 Failure: test_write_invalid_db 00:08:13.717 00:08:13.717 Executing: test_invalid_db_write_overflow_sq 00:08:13.717 Waiting for AER completion... 00:08:13.717 Failure: test_invalid_db_write_overflow_sq 00:08:13.717 00:08:13.717 Executing: test_invalid_db_write_overflow_cq 00:08:13.717 Waiting for AER completion... 
00:08:13.717 Failure: test_invalid_db_write_overflow_cq 00:08:13.717 00:08:13.717 10:28:48 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:13.717 10:28:48 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:13.717 [2024-09-28 10:28:48.350872] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76383) is not found. Dropping the request. 00:08:23.682 Executing: test_write_invalid_db 00:08:23.682 Waiting for AER completion... 00:08:23.682 Failure: test_write_invalid_db 00:08:23.682 00:08:23.682 Executing: test_invalid_db_write_overflow_sq 00:08:23.682 Waiting for AER completion... 00:08:23.683 Failure: test_invalid_db_write_overflow_sq 00:08:23.683 00:08:23.683 Executing: test_invalid_db_write_overflow_cq 00:08:23.683 Waiting for AER completion... 00:08:23.683 Failure: test_invalid_db_write_overflow_cq 00:08:23.683 00:08:23.683 00:08:23.683 real 0m40.180s 00:08:23.683 user 0m34.069s 00:08:23.683 sys 0m5.722s 00:08:23.683 10:28:58 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:23.683 ************************************ 00:08:23.683 END TEST nvme_doorbell_aers 00:08:23.683 ************************************ 00:08:23.683 10:28:58 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:23.683 10:28:58 nvme -- nvme/nvme.sh@97 -- # uname 00:08:23.683 10:28:58 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:23.683 10:28:58 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:23.683 10:28:58 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:08:23.683 10:28:58 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:23.683 10:28:58 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:23.683 ************************************ 00:08:23.683 START TEST nvme_multi_aen 00:08:23.683 ************************************ 00:08:23.683 10:28:58 nvme.nvme_multi_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:23.683 [2024-09-28 10:28:58.389121] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76383) is not found. Dropping the request. 00:08:23.683 [2024-09-28 10:28:58.389500] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76383) is not found. Dropping the request. 00:08:23.683 [2024-09-28 10:28:58.389552] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76383) is not found. Dropping the request. 00:08:23.683 [2024-09-28 10:28:58.390686] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76383) is not found. Dropping the request. 00:08:23.683 [2024-09-28 10:28:58.390762] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76383) is not found. Dropping the request. 00:08:23.683 [2024-09-28 10:28:58.390796] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76383) is not found. Dropping the request. 00:08:23.683 [2024-09-28 10:28:58.391938] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76383) is not found. 
Dropping the request. 00:08:23.683 [2024-09-28 10:28:58.392009] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76383) is not found. Dropping the request. 00:08:23.683 [2024-09-28 10:28:58.392047] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76383) is not found. Dropping the request. 00:08:23.683 [2024-09-28 10:28:58.393118] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76383) is not found. Dropping the request. 00:08:23.683 [2024-09-28 10:28:58.393182] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76383) is not found. Dropping the request. 00:08:23.683 [2024-09-28 10:28:58.393214] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76383) is not found. Dropping the request. 00:08:23.683 Child process pid: 76909 00:08:23.941 [Child] Asynchronous Event Request test 00:08:23.941 [Child] Attached to 0000:00:10.0 00:08:23.941 [Child] Attached to 0000:00:11.0 00:08:23.941 [Child] Attached to 0000:00:13.0 00:08:23.941 [Child] Attached to 0000:00:12.0 00:08:23.941 [Child] Registering asynchronous event callbacks... 00:08:23.941 [Child] Getting orig temperature thresholds of all controllers 00:08:23.941 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:23.941 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:23.942 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:23.942 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:23.942 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:23.942 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:23.942 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:23.942 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:23.942 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:23.942 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:23.942 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:23.942 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:23.942 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:23.942 [Child] Cleaning up... 00:08:23.942 Asynchronous Event Request test 00:08:23.942 Attached to 0000:00:10.0 00:08:23.942 Attached to 0000:00:11.0 00:08:23.942 Attached to 0000:00:13.0 00:08:23.942 Attached to 0000:00:12.0 00:08:23.942 Reset controller to setup AER completions for this process 00:08:23.942 Registering asynchronous event callbacks... 
00:08:23.942 Getting orig temperature thresholds of all controllers 00:08:23.942 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:23.942 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:23.942 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:23.942 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:23.942 Setting all controllers temperature threshold low to trigger AER 00:08:23.942 Waiting for all controllers temperature threshold to be set lower 00:08:23.942 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:23.942 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:23.942 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:23.942 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:23.942 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:23.942 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:23.942 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:23.942 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:23.942 Waiting for all controllers to trigger AER and reset threshold 00:08:23.942 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:23.942 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:23.942 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:23.942 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:23.942 Cleaning up... 00:08:23.942 00:08:23.942 real 0m0.383s 00:08:23.942 user 0m0.115s 00:08:23.942 sys 0m0.165s 00:08:23.942 10:28:58 nvme.nvme_multi_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:23.942 10:28:58 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:23.942 ************************************ 00:08:23.942 END TEST nvme_multi_aen 00:08:23.942 ************************************ 00:08:23.942 10:28:58 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:23.942 10:28:58 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:23.942 10:28:58 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:23.942 10:28:58 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:23.942 ************************************ 00:08:23.942 START TEST nvme_startup 00:08:23.942 ************************************ 00:08:23.942 10:28:58 nvme.nvme_startup -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:24.201 Initializing NVMe Controllers 00:08:24.201 Attached to 0000:00:10.0 00:08:24.201 Attached to 0000:00:11.0 00:08:24.201 Attached to 0000:00:13.0 00:08:24.201 Attached to 0000:00:12.0 00:08:24.201 Initialization complete. 00:08:24.201 Time used:106200.156 (us). 
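The two AER tests above (nvme_single_aen and nvme_multi_aen) register an asynchronous event callback, lower each controller's temperature threshold below the reported 323 Kelvin so an event fires, and restore the threshold from the callback. The fragment below sketches only the arming side of that against SPDK's driver API, assuming a controller that is already attached (for example via the probe flow sketched earlier); spdk_nvme_ctrlr_register_aer_callback() and spdk_nvme_ctrlr_cmd_set_feature() are upstream calls, but their exact signatures should be confirmed against spdk/nvme.h before reuse:

/* aer_sketch.c - arm an asynchronous event by lowering the temperature threshold (illustrative). */
#include <stdio.h>
#include "spdk/nvme.h"

static void aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
    /* cdw0 carries the async event type/info; log page 2 (SMART / health information)
     * holds the temperature reading that crossed the threshold. */
    printf("aer_cb: cdw0=0x%08x\n", cpl->cdw0);
}

static void set_feature_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
    printf("temperature threshold updated\n");
}

void arm_temperature_aer(struct spdk_nvme_ctrlr *ctrlr)
{
    spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
    /* Feature 0x04 (temperature threshold), cdw11 = threshold in Kelvin; 200 K is far
     * below the 323 K the controllers report, so an async event is raised promptly. */
    spdk_nvme_ctrlr_cmd_set_feature(ctrlr, SPDK_NVME_FEAT_TEMPERATURE_THRESHOLD,
                                    200, 0, NULL, 0, set_feature_done, NULL);
    /* The caller then keeps polling spdk_nvme_ctrlr_process_admin_completions(ctrlr)
     * until the AER completion arrives. */
}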
00:08:24.201 00:08:24.201 real 0m0.155s 00:08:24.201 user 0m0.046s 00:08:24.201 sys 0m0.080s 00:08:24.201 10:28:58 nvme.nvme_startup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:24.201 10:28:58 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:24.201 ************************************ 00:08:24.201 END TEST nvme_startup 00:08:24.201 ************************************ 00:08:24.201 10:28:58 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:24.201 10:28:58 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:24.201 10:28:58 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:24.201 10:28:58 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:24.201 ************************************ 00:08:24.201 START TEST nvme_multi_secondary 00:08:24.201 ************************************ 00:08:24.201 10:28:58 nvme.nvme_multi_secondary -- common/autotest_common.sh@1125 -- # nvme_multi_secondary 00:08:24.201 10:28:58 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=76960 00:08:24.201 10:28:58 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:24.201 10:28:58 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=76961 00:08:24.201 10:28:58 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:24.201 10:28:58 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:27.484 Initializing NVMe Controllers 00:08:27.484 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:27.484 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:27.484 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:27.484 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:27.484 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:27.484 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:27.484 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:27.484 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:27.484 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:27.484 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:27.484 Initialization complete. Launching workers. 
00:08:27.484 ======================================================== 00:08:27.484 Latency(us) 00:08:27.484 Device Information : IOPS MiB/s Average min max 00:08:27.484 PCIE (0000:00:10.0) NSID 1 from core 2: 3309.01 12.93 4833.30 753.43 15673.17 00:08:27.484 PCIE (0000:00:11.0) NSID 1 from core 2: 3309.01 12.93 4835.28 771.38 14631.66 00:08:27.484 PCIE (0000:00:13.0) NSID 1 from core 2: 3309.01 12.93 4835.32 784.40 13978.05 00:08:27.484 PCIE (0000:00:12.0) NSID 1 from core 2: 3309.01 12.93 4834.93 787.80 14047.08 00:08:27.484 PCIE (0000:00:12.0) NSID 2 from core 2: 3309.01 12.93 4835.37 784.81 15251.65 00:08:27.484 PCIE (0000:00:12.0) NSID 3 from core 2: 3309.01 12.93 4835.01 779.69 15610.85 00:08:27.484 ======================================================== 00:08:27.484 Total : 19854.08 77.55 4834.87 753.43 15673.17 00:08:27.484 00:08:27.484 10:29:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 76960 00:08:27.484 Initializing NVMe Controllers 00:08:27.484 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:27.484 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:27.484 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:27.484 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:27.484 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:27.484 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:27.484 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:27.484 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:27.484 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:27.484 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:27.484 Initialization complete. Launching workers. 00:08:27.484 ======================================================== 00:08:27.484 Latency(us) 00:08:27.484 Device Information : IOPS MiB/s Average min max 00:08:27.484 PCIE (0000:00:10.0) NSID 1 from core 1: 7932.45 30.99 2015.69 1003.50 7973.69 00:08:27.484 PCIE (0000:00:11.0) NSID 1 from core 1: 7932.45 30.99 2016.63 1074.09 7798.25 00:08:27.484 PCIE (0000:00:13.0) NSID 1 from core 1: 7932.45 30.99 2016.68 1007.17 9723.23 00:08:27.484 PCIE (0000:00:12.0) NSID 1 from core 1: 7932.45 30.99 2016.64 1026.12 10353.87 00:08:27.484 PCIE (0000:00:12.0) NSID 2 from core 1: 7932.45 30.99 2016.62 1000.37 8751.25 00:08:27.484 PCIE (0000:00:12.0) NSID 3 from core 1: 7932.45 30.99 2016.72 1025.69 8841.02 00:08:27.484 ======================================================== 00:08:27.484 Total : 47594.68 185.92 2016.50 1000.37 10353.87 00:08:27.484 00:08:29.382 Initializing NVMe Controllers 00:08:29.382 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:29.382 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:29.382 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:29.382 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:29.382 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:29.382 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:29.382 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:29.382 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:29.382 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:29.382 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:29.382 Initialization complete. Launching workers. 
00:08:29.382 ======================================================== 00:08:29.382 Latency(us) 00:08:29.382 Device Information : IOPS MiB/s Average min max 00:08:29.382 PCIE (0000:00:10.0) NSID 1 from core 0: 10984.04 42.91 1455.43 681.69 8838.37 00:08:29.382 PCIE (0000:00:11.0) NSID 1 from core 0: 10988.84 42.93 1455.64 666.90 7938.84 00:08:29.382 PCIE (0000:00:13.0) NSID 1 from core 0: 10987.64 42.92 1455.78 601.62 8840.86 00:08:29.382 PCIE (0000:00:12.0) NSID 1 from core 0: 10988.04 42.92 1455.70 528.33 7832.77 00:08:29.382 PCIE (0000:00:12.0) NSID 2 from core 0: 10986.04 42.91 1455.95 443.33 8373.76 00:08:29.382 PCIE (0000:00:12.0) NSID 3 from core 0: 10988.24 42.92 1455.61 379.57 9922.82 00:08:29.382 ======================================================== 00:08:29.382 Total : 65922.84 257.51 1455.68 379.57 9922.82 00:08:29.382 00:08:29.382 10:29:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 76961 00:08:29.382 10:29:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=77030 00:08:29.382 10:29:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:29.382 10:29:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=77031 00:08:29.382 10:29:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:29.382 10:29:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:32.673 Initializing NVMe Controllers 00:08:32.673 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:32.674 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:32.674 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:32.674 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:32.674 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:32.674 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:32.674 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:32.674 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:32.674 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:32.674 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:32.674 Initialization complete. Launching workers. 
00:08:32.674 ======================================================== 00:08:32.674 Latency(us) 00:08:32.674 Device Information : IOPS MiB/s Average min max 00:08:32.674 PCIE (0000:00:10.0) NSID 1 from core 0: 6537.16 25.54 2446.19 690.97 12705.60 00:08:32.674 PCIE (0000:00:11.0) NSID 1 from core 0: 6537.16 25.54 2447.68 715.81 11851.49 00:08:32.674 PCIE (0000:00:13.0) NSID 1 from core 0: 6537.16 25.54 2447.98 714.83 12196.01 00:08:32.674 PCIE (0000:00:12.0) NSID 1 from core 0: 6537.16 25.54 2448.32 729.38 12895.88 00:08:32.674 PCIE (0000:00:12.0) NSID 2 from core 0: 6537.16 25.54 2448.47 726.80 14351.43 00:08:32.674 PCIE (0000:00:12.0) NSID 3 from core 0: 6537.16 25.54 2448.41 715.77 12801.95 00:08:32.674 ======================================================== 00:08:32.674 Total : 39222.99 153.21 2447.84 690.97 14351.43 00:08:32.674 00:08:32.940 Initializing NVMe Controllers 00:08:32.941 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:32.941 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:32.941 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:32.941 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:32.941 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:32.941 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:32.941 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:32.941 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:32.941 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:32.941 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:32.941 Initialization complete. Launching workers. 00:08:32.941 ======================================================== 00:08:32.941 Latency(us) 00:08:32.941 Device Information : IOPS MiB/s Average min max 00:08:32.941 PCIE (0000:00:10.0) NSID 1 from core 1: 6071.83 23.72 2633.62 1014.70 14547.00 00:08:32.941 PCIE (0000:00:11.0) NSID 1 from core 1: 6071.83 23.72 2634.55 1050.86 14547.05 00:08:32.941 PCIE (0000:00:13.0) NSID 1 from core 1: 6071.83 23.72 2634.45 1046.04 12515.58 00:08:32.941 PCIE (0000:00:12.0) NSID 1 from core 1: 6071.83 23.72 2634.34 1054.00 11415.87 00:08:32.941 PCIE (0000:00:12.0) NSID 2 from core 1: 6071.83 23.72 2634.25 833.09 11070.69 00:08:32.941 PCIE (0000:00:12.0) NSID 3 from core 1: 6071.83 23.72 2634.15 672.43 11790.43 00:08:32.941 ======================================================== 00:08:32.941 Total : 36430.97 142.31 2634.23 672.43 14547.05 00:08:32.941 00:08:34.855 Initializing NVMe Controllers 00:08:34.855 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:34.855 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:34.855 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:34.855 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:34.855 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:34.855 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:34.855 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:34.855 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:34.855 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:34.855 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:34.855 Initialization complete. Launching workers. 
00:08:34.855 ======================================================== 00:08:34.855 Latency(us) 00:08:34.855 Device Information : IOPS MiB/s Average min max 00:08:34.855 PCIE (0000:00:10.0) NSID 1 from core 2: 2539.92 9.92 6297.44 923.90 28509.00 00:08:34.855 PCIE (0000:00:11.0) NSID 1 from core 2: 2539.92 9.92 6298.88 952.89 28277.80 00:08:34.855 PCIE (0000:00:13.0) NSID 1 from core 2: 2539.92 9.92 6298.75 958.66 25473.02 00:08:34.855 PCIE (0000:00:12.0) NSID 1 from core 2: 2539.92 9.92 6298.34 953.04 27617.56 00:08:34.855 PCIE (0000:00:12.0) NSID 2 from core 2: 2539.92 9.92 6298.69 839.67 27218.81 00:08:34.855 PCIE (0000:00:12.0) NSID 3 from core 2: 2539.92 9.92 6298.59 952.12 33686.76 00:08:34.855 ======================================================== 00:08:34.855 Total : 15239.54 59.53 6298.45 839.67 33686.76 00:08:34.855 00:08:34.855 10:29:09 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 77030 00:08:34.855 10:29:09 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 77031 00:08:34.855 00:08:34.856 real 0m10.619s 00:08:34.856 user 0m18.244s 00:08:34.856 sys 0m0.606s 00:08:34.856 10:29:09 nvme.nvme_multi_secondary -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:34.856 10:29:09 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:08:34.856 ************************************ 00:08:34.856 END TEST nvme_multi_secondary 00:08:34.856 ************************************ 00:08:34.856 10:29:09 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:08:34.856 10:29:09 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:08:34.856 10:29:09 nvme -- common/autotest_common.sh@1089 -- # [[ -e /proc/75997 ]] 00:08:34.856 10:29:09 nvme -- common/autotest_common.sh@1090 -- # kill 75997 00:08:34.856 10:29:09 nvme -- common/autotest_common.sh@1091 -- # wait 75997 00:08:34.856 [2024-09-28 10:29:09.505758] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76908) is not found. Dropping the request. 00:08:34.856 [2024-09-28 10:29:09.505831] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76908) is not found. Dropping the request. 00:08:34.856 [2024-09-28 10:29:09.505853] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76908) is not found. Dropping the request. 00:08:34.856 [2024-09-28 10:29:09.505870] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76908) is not found. Dropping the request. 00:08:34.856 [2024-09-28 10:29:09.506505] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76908) is not found. Dropping the request. 00:08:34.856 [2024-09-28 10:29:09.506549] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76908) is not found. Dropping the request. 00:08:34.856 [2024-09-28 10:29:09.506568] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76908) is not found. Dropping the request. 00:08:34.856 [2024-09-28 10:29:09.506583] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76908) is not found. Dropping the request. 00:08:34.856 [2024-09-28 10:29:09.507126] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76908) is not found. Dropping the request. 
00:08:34.856 [2024-09-28 10:29:09.507174] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76908) is not found. Dropping the request. 00:08:34.856 [2024-09-28 10:29:09.507195] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76908) is not found. Dropping the request. 00:08:34.856 [2024-09-28 10:29:09.507210] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76908) is not found. Dropping the request. 00:08:34.856 [2024-09-28 10:29:09.507726] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76908) is not found. Dropping the request. 00:08:34.856 [2024-09-28 10:29:09.507770] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76908) is not found. Dropping the request. 00:08:34.856 [2024-09-28 10:29:09.507789] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76908) is not found. Dropping the request. 00:08:34.856 [2024-09-28 10:29:09.507805] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76908) is not found. Dropping the request. 00:08:34.856 10:29:09 nvme -- common/autotest_common.sh@1093 -- # rm -f /var/run/spdk_stub0 00:08:34.856 10:29:09 nvme -- common/autotest_common.sh@1097 -- # echo 2 00:08:34.856 10:29:09 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:34.856 10:29:09 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:34.856 10:29:09 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:34.856 10:29:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:34.856 ************************************ 00:08:34.856 START TEST bdev_nvme_reset_stuck_adm_cmd 00:08:34.856 ************************************ 00:08:34.856 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:35.118 * Looking for test storage... 
00:08:35.118 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:35.118 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:35.118 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lcov --version 00:08:35.118 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:35.118 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:35.118 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:35.118 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:35.118 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:35.118 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:08:35.118 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:08:35.118 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:08:35.118 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:08:35.118 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:08:35.118 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:08:35.118 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:08:35.118 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:35.118 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:08:35.118 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:08:35.118 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:35.118 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:35.118 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:08:35.118 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:08:35.118 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:35.118 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:08:35.118 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:35.118 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:08:35.118 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:08:35.118 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:35.118 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:08:35.118 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:35.118 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:35.118 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:35.118 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:08:35.118 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:35.118 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:35.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.118 --rc genhtml_branch_coverage=1 00:08:35.118 --rc genhtml_function_coverage=1 00:08:35.118 --rc genhtml_legend=1 00:08:35.118 --rc geninfo_all_blocks=1 00:08:35.118 --rc geninfo_unexecuted_blocks=1 00:08:35.118 00:08:35.118 ' 00:08:35.118 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:35.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.118 --rc genhtml_branch_coverage=1 00:08:35.118 --rc genhtml_function_coverage=1 00:08:35.118 --rc genhtml_legend=1 00:08:35.118 --rc geninfo_all_blocks=1 00:08:35.118 --rc geninfo_unexecuted_blocks=1 00:08:35.118 00:08:35.118 ' 00:08:35.118 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:35.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.118 --rc genhtml_branch_coverage=1 00:08:35.118 --rc genhtml_function_coverage=1 00:08:35.118 --rc genhtml_legend=1 00:08:35.118 --rc geninfo_all_blocks=1 00:08:35.118 --rc geninfo_unexecuted_blocks=1 00:08:35.118 00:08:35.118 ' 00:08:35.118 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:35.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.118 --rc genhtml_branch_coverage=1 00:08:35.118 --rc genhtml_function_coverage=1 00:08:35.118 --rc genhtml_legend=1 00:08:35.118 --rc geninfo_all_blocks=1 00:08:35.118 --rc geninfo_unexecuted_blocks=1 00:08:35.118 00:08:35.118 ' 00:08:35.118 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:08:35.118 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:08:35.118 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:08:35.118 
10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:08:35.118 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:08:35.118 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:08:35.118 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # bdfs=() 00:08:35.118 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # local bdfs 00:08:35.118 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:08:35.118 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:08:35.118 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:35.118 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # local bdfs 00:08:35.119 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:35.119 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:35.119 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:35.119 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:35.119 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:35.119 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:08:35.119 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:08:35.119 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:08:35.119 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=77192 00:08:35.119 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:35.119 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 77192 00:08:35.119 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # '[' -z 77192 ']' 00:08:35.119 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.119 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:35.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:35.119 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:35.119 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:35.119 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:35.119 10:29:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:08:35.380 [2024-09-28 10:29:09.897481] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:08:35.380 [2024-09-28 10:29:09.897627] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77192 ] 00:08:35.380 [2024-09-28 10:29:10.043995] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:35.380 [2024-09-28 10:29:10.062654] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:35.380 [2024-09-28 10:29:10.116250] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:35.380 [2024-09-28 10:29:10.116554] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:35.380 [2024-09-28 10:29:10.116846] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.380 [2024-09-28 10:29:10.116939] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:08:36.320 10:29:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:36.320 10:29:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # return 0 00:08:36.320 10:29:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:08:36.320 10:29:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.320 10:29:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:36.320 nvme0n1 00:08:36.320 10:29:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.320 10:29:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:08:36.320 10:29:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_bSglY.txt 00:08:36.320 10:29:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:08:36.320 10:29:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.320 10:29:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:36.320 true 00:08:36.320 10:29:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.320 10:29:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:08:36.320 10:29:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1727519350 00:08:36.320 10:29:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=77215 00:08:36.320 10:29:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:08:36.320 10:29:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:36.320 10:29:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:08:38.229 10:29:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:08:38.229 10:29:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.229 10:29:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:38.229 [2024-09-28 10:29:12.821055] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:08:38.229 [2024-09-28 10:29:12.821297] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:08:38.229 [2024-09-28 10:29:12.821321] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:08:38.229 [2024-09-28 10:29:12.821336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.229 [2024-09-28 10:29:12.825066] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:38.229 10:29:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.229 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 77215 00:08:38.229 10:29:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 77215 00:08:38.229 10:29:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 77215 00:08:38.229 10:29:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:08:38.229 10:29:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:08:38.230 10:29:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:08:38.230 10:29:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.230 10:29:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:38.230 10:29:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.230 10:29:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:08:38.230 10:29:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_bSglY.txt 00:08:38.230 10:29:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:08:38.230 10:29:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:08:38.230 10:29:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:38.230 10:29:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 
"0x%02x\n"')) 00:08:38.230 10:29:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:38.230 10:29:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:38.230 10:29:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:38.230 10:29:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:38.230 10:29:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:08:38.230 10:29:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:08:38.230 10:29:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:08:38.230 10:29:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:38.230 10:29:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:38.230 10:29:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:38.230 10:29:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:38.230 10:29:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:38.230 10:29:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:38.230 10:29:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:08:38.230 10:29:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:08:38.230 10:29:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_bSglY.txt 00:08:38.230 10:29:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 77192 00:08:38.230 10:29:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # '[' -z 77192 ']' 00:08:38.230 10:29:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # kill -0 77192 00:08:38.230 10:29:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # uname 00:08:38.230 10:29:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:38.230 10:29:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77192 00:08:38.230 10:29:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:38.230 10:29:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:38.230 killing process with pid 77192 00:08:38.230 10:29:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77192' 00:08:38.230 10:29:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@969 -- # kill 77192 00:08:38.230 10:29:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@974 -- # wait 77192 00:08:38.488 10:29:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 
00:08:38.488 10:29:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:08:38.488 00:08:38.488 real 0m3.598s 00:08:38.488 user 0m12.611s 00:08:38.488 sys 0m0.555s 00:08:38.488 10:29:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:38.488 ************************************ 00:08:38.488 END TEST bdev_nvme_reset_stuck_adm_cmd 00:08:38.488 ************************************ 00:08:38.488 10:29:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:38.488 10:29:13 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:08:38.488 10:29:13 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:08:38.488 10:29:13 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:38.488 10:29:13 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:38.488 10:29:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:38.488 ************************************ 00:08:38.488 START TEST nvme_fio 00:08:38.488 ************************************ 00:08:38.488 10:29:13 nvme.nvme_fio -- common/autotest_common.sh@1125 -- # nvme_fio_test 00:08:38.488 10:29:13 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:08:38.488 10:29:13 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:08:38.488 10:29:13 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:08:38.488 10:29:13 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:38.488 10:29:13 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # local bdfs 00:08:38.488 10:29:13 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:38.748 10:29:13 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:38.748 10:29:13 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:38.748 10:29:13 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:38.748 10:29:13 nvme.nvme_fio -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:38.748 10:29:13 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:08:38.748 10:29:13 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:08:38.748 10:29:13 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:38.748 10:29:13 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:38.748 10:29:13 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:39.010 10:29:13 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:39.010 10:29:13 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:39.010 10:29:13 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:39.010 10:29:13 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:39.010 10:29:13 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:39.010 10:29:13 nvme.nvme_fio -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:08:39.010 10:29:13 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:39.010 10:29:13 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:08:39.010 10:29:13 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:39.010 10:29:13 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:08:39.010 10:29:13 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:08:39.010 10:29:13 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:08:39.010 10:29:13 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:08:39.010 10:29:13 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:39.010 10:29:13 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:08:39.010 10:29:13 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:39.010 10:29:13 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:39.010 10:29:13 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:08:39.010 10:29:13 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:39.010 10:29:13 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:39.271 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:39.271 fio-3.35 00:08:39.271 Starting 1 thread 00:08:44.557 00:08:44.557 test: (groupid=0, jobs=1): err= 0: pid=77344: Sat Sep 28 10:29:19 2024 00:08:44.557 read: IOPS=17.5k, BW=68.3MiB/s (71.7MB/s)(138MiB/2026msec) 00:08:44.557 slat (nsec): min=3310, max=79376, avg=5264.20, stdev=2466.84 00:08:44.557 clat (usec): min=871, max=34637, avg=3318.50, stdev=1534.87 00:08:44.557 lat (usec): min=875, max=34642, avg=3323.77, stdev=1535.52 00:08:44.557 clat percentiles (usec): 00:08:44.557 | 1.00th=[ 1532], 5.00th=[ 2180], 10.00th=[ 2343], 20.00th=[ 2442], 00:08:44.557 | 30.00th=[ 2540], 40.00th=[ 2671], 50.00th=[ 2802], 60.00th=[ 2999], 00:08:44.557 | 70.00th=[ 3326], 80.00th=[ 4113], 90.00th=[ 5080], 95.00th=[ 6128], 00:08:44.557 | 99.00th=[ 8094], 99.50th=[ 8979], 99.90th=[10945], 99.95th=[30540], 00:08:44.557 | 99.99th=[33817] 00:08:44.557 bw ( KiB/s): min=35864, max=84136, per=100.00%, avg=70822.00, stdev=23406.23, samples=4 00:08:44.557 iops : min= 8964, max=21034, avg=17705.00, stdev=5852.55, samples=4 00:08:44.557 write: IOPS=17.5k, BW=68.3MiB/s (71.6MB/s)(138MiB/2026msec); 0 zone resets 00:08:44.557 slat (nsec): min=3417, max=63902, avg=5373.00, stdev=2355.38 00:08:44.557 clat (usec): min=845, max=67276, avg=3977.72, stdev=5454.96 00:08:44.557 lat (usec): min=856, max=67281, avg=3983.10, stdev=5455.09 00:08:44.557 clat percentiles (usec): 00:08:44.557 | 1.00th=[ 1598], 5.00th=[ 2212], 10.00th=[ 2343], 20.00th=[ 2474], 00:08:44.557 | 30.00th=[ 2573], 40.00th=[ 2704], 50.00th=[ 2835], 60.00th=[ 3032], 00:08:44.557 | 70.00th=[ 3326], 80.00th=[ 4113], 90.00th=[ 5145], 95.00th=[ 6259], 00:08:44.557 | 99.00th=[42730], 99.50th=[44827], 99.90th=[50594], 99.95th=[60556], 00:08:44.557 | 99.99th=[65799] 00:08:44.557 bw ( KiB/s): min=35208, 
max=84176, per=100.00%, avg=70688.00, stdev=23763.69, samples=4 00:08:44.557 iops : min= 8802, max=21044, avg=17672.00, stdev=5940.92, samples=4 00:08:44.557 lat (usec) : 1000=0.06% 00:08:44.557 lat (msec) : 2=2.59%, 4=76.25%, 10=20.01%, 20=0.12%, 50=0.92% 00:08:44.557 lat (msec) : 100=0.05% 00:08:44.557 cpu : usr=99.11%, sys=0.15%, ctx=5, majf=0, minf=624 00:08:44.557 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:44.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:44.557 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:44.557 issued rwts: total=35444,35438,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:44.557 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:44.557 00:08:44.557 Run status group 0 (all jobs): 00:08:44.557 READ: bw=68.3MiB/s (71.7MB/s), 68.3MiB/s-68.3MiB/s (71.7MB/s-71.7MB/s), io=138MiB (145MB), run=2026-2026msec 00:08:44.557 WRITE: bw=68.3MiB/s (71.6MB/s), 68.3MiB/s-68.3MiB/s (71.6MB/s-71.6MB/s), io=138MiB (145MB), run=2026-2026msec 00:08:44.818 ----------------------------------------------------- 00:08:44.818 Suppressions used: 00:08:44.818 count bytes template 00:08:44.818 1 32 /usr/src/fio/parse.c 00:08:44.818 1 8 libtcmalloc_minimal.so 00:08:44.818 ----------------------------------------------------- 00:08:44.818 00:08:44.818 10:29:19 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:44.818 10:29:19 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:44.818 10:29:19 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:44.818 10:29:19 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:45.080 10:29:19 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:45.080 10:29:19 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:45.342 10:29:19 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:45.342 10:29:19 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:45.342 10:29:19 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:45.342 10:29:19 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:08:45.342 10:29:19 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:45.342 10:29:19 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:08:45.342 10:29:19 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:45.342 10:29:19 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:08:45.342 10:29:19 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:08:45.342 10:29:19 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:08:45.342 10:29:19 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:45.342 10:29:19 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:08:45.342 10:29:19 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 
00:08:45.342 10:29:19 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:45.342 10:29:19 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:45.342 10:29:19 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:08:45.342 10:29:19 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:45.342 10:29:19 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:45.342 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:45.342 fio-3.35 00:08:45.342 Starting 1 thread 00:08:50.629 00:08:50.629 test: (groupid=0, jobs=1): err= 0: pid=77400: Sat Sep 28 10:29:25 2024 00:08:50.630 read: IOPS=17.2k, BW=67.1MiB/s (70.4MB/s)(134MiB/2001msec) 00:08:50.630 slat (nsec): min=4859, max=86406, avg=6300.05, stdev=3058.10 00:08:50.630 clat (usec): min=228, max=13210, avg=3693.85, stdev=1121.75 00:08:50.630 lat (usec): min=233, max=13258, avg=3700.15, stdev=1122.95 00:08:50.630 clat percentiles (usec): 00:08:50.630 | 1.00th=[ 2147], 5.00th=[ 2737], 10.00th=[ 2868], 20.00th=[ 2999], 00:08:50.630 | 30.00th=[ 3097], 40.00th=[ 3195], 50.00th=[ 3326], 60.00th=[ 3458], 00:08:50.630 | 70.00th=[ 3687], 80.00th=[ 4146], 90.00th=[ 5276], 95.00th=[ 6194], 00:08:50.630 | 99.00th=[ 7963], 99.50th=[ 8455], 99.90th=[ 9241], 99.95th=[10290], 00:08:50.630 | 99.99th=[13042] 00:08:50.630 bw ( KiB/s): min=65928, max=69616, per=98.31%, avg=67578.67, stdev=1874.16, samples=3 00:08:50.630 iops : min=16482, max=17404, avg=16894.67, stdev=468.54, samples=3 00:08:50.630 write: IOPS=17.2k, BW=67.2MiB/s (70.5MB/s)(134MiB/2001msec); 0 zone resets 00:08:50.630 slat (nsec): min=4985, max=85398, avg=6455.96, stdev=3149.72 00:08:50.630 clat (usec): min=237, max=13123, avg=3724.72, stdev=1142.88 00:08:50.630 lat (usec): min=243, max=13142, avg=3731.18, stdev=1144.08 00:08:50.630 clat percentiles (usec): 00:08:50.630 | 1.00th=[ 2180], 5.00th=[ 2769], 10.00th=[ 2900], 20.00th=[ 3032], 00:08:50.630 | 30.00th=[ 3130], 40.00th=[ 3228], 50.00th=[ 3326], 60.00th=[ 3490], 00:08:50.630 | 70.00th=[ 3687], 80.00th=[ 4146], 90.00th=[ 5342], 95.00th=[ 6259], 00:08:50.630 | 99.00th=[ 8094], 99.50th=[ 8586], 99.90th=[ 9634], 99.95th=[10421], 00:08:50.630 | 99.99th=[12911] 00:08:50.630 bw ( KiB/s): min=65704, max=69424, per=98.08%, avg=67498.67, stdev=1863.44, samples=3 00:08:50.630 iops : min=16426, max=17356, avg=16874.67, stdev=465.86, samples=3 00:08:50.630 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:08:50.630 lat (msec) : 2=0.64%, 4=77.43%, 10=21.81%, 20=0.07% 00:08:50.630 cpu : usr=98.70%, sys=0.10%, ctx=4, majf=0, minf=624 00:08:50.630 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:50.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.630 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:50.630 issued rwts: total=34388,34428,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:50.630 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:50.630 00:08:50.630 Run status group 0 (all jobs): 00:08:50.630 READ: bw=67.1MiB/s (70.4MB/s), 67.1MiB/s-67.1MiB/s (70.4MB/s-70.4MB/s), io=134MiB (141MB), run=2001-2001msec 00:08:50.630 WRITE: bw=67.2MiB/s (70.5MB/s), 67.2MiB/s-67.2MiB/s 
(70.5MB/s-70.5MB/s), io=134MiB (141MB), run=2001-2001msec 00:08:50.630 ----------------------------------------------------- 00:08:50.630 Suppressions used: 00:08:50.630 count bytes template 00:08:50.630 1 32 /usr/src/fio/parse.c 00:08:50.630 1 8 libtcmalloc_minimal.so 00:08:50.630 ----------------------------------------------------- 00:08:50.630 00:08:50.630 10:29:25 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:50.630 10:29:25 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:50.630 10:29:25 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:50.630 10:29:25 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:50.906 10:29:25 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:50.906 10:29:25 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:50.906 10:29:25 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:50.906 10:29:25 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:50.906 10:29:25 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:50.906 10:29:25 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:08:50.906 10:29:25 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:50.906 10:29:25 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:08:50.906 10:29:25 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:50.906 10:29:25 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:08:50.906 10:29:25 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:08:50.906 10:29:25 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:08:50.906 10:29:25 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:08:50.906 10:29:25 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:08:50.906 10:29:25 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:50.906 10:29:25 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:50.906 10:29:25 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:50.906 10:29:25 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:08:50.906 10:29:25 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:50.906 10:29:25 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:51.164 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:51.164 fio-3.35 00:08:51.164 Starting 1 thread 00:08:59.271 00:08:59.271 test: (groupid=0, jobs=1): err= 0: pid=77461: Sat Sep 28 10:29:32 2024 00:08:59.271 read: IOPS=24.3k, BW=94.8MiB/s (99.4MB/s)(190MiB/2001msec) 
00:08:59.271 slat (nsec): min=4202, max=62706, avg=4870.44, stdev=1948.32 00:08:59.271 clat (usec): min=768, max=11823, avg=2637.98, stdev=732.72 00:08:59.271 lat (usec): min=773, max=11872, avg=2642.85, stdev=734.02 00:08:59.271 clat percentiles (usec): 00:08:59.271 | 1.00th=[ 1827], 5.00th=[ 2278], 10.00th=[ 2343], 20.00th=[ 2409], 00:08:59.271 | 30.00th=[ 2409], 40.00th=[ 2442], 50.00th=[ 2474], 60.00th=[ 2507], 00:08:59.271 | 70.00th=[ 2540], 80.00th=[ 2573], 90.00th=[ 2769], 95.00th=[ 4080], 00:08:59.271 | 99.00th=[ 6325], 99.50th=[ 6587], 99.90th=[ 7767], 99.95th=[ 8455], 00:08:59.271 | 99.99th=[11469] 00:08:59.271 bw ( KiB/s): min=93192, max=98176, per=98.60%, avg=95744.00, stdev=2494.17, samples=3 00:08:59.271 iops : min=23298, max=24544, avg=23936.00, stdev=623.54, samples=3 00:08:59.271 write: IOPS=24.1k, BW=94.2MiB/s (98.8MB/s)(189MiB/2001msec); 0 zone resets 00:08:59.271 slat (nsec): min=4319, max=55236, avg=5128.02, stdev=1946.99 00:08:59.271 clat (usec): min=325, max=11649, avg=2632.42, stdev=720.21 00:08:59.271 lat (usec): min=330, max=11664, avg=2637.54, stdev=721.48 00:08:59.271 clat percentiles (usec): 00:08:59.271 | 1.00th=[ 1811], 5.00th=[ 2278], 10.00th=[ 2343], 20.00th=[ 2409], 00:08:59.271 | 30.00th=[ 2442], 40.00th=[ 2442], 50.00th=[ 2474], 60.00th=[ 2507], 00:08:59.271 | 70.00th=[ 2540], 80.00th=[ 2573], 90.00th=[ 2769], 95.00th=[ 3949], 00:08:59.271 | 99.00th=[ 6259], 99.50th=[ 6521], 99.90th=[ 7767], 99.95th=[ 8586], 00:08:59.271 | 99.99th=[11207] 00:08:59.271 bw ( KiB/s): min=92928, max=99328, per=99.26%, avg=95789.33, stdev=3253.32, samples=3 00:08:59.271 iops : min=23232, max=24832, avg=23947.33, stdev=813.33, samples=3 00:08:59.271 lat (usec) : 500=0.01%, 1000=0.03% 00:08:59.271 lat (msec) : 2=2.11%, 4=92.83%, 10=5.00%, 20=0.03% 00:08:59.271 cpu : usr=99.35%, sys=0.00%, ctx=7, majf=0, minf=624 00:08:59.271 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:59.271 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:59.271 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:59.271 issued rwts: total=48574,48276,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:59.271 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:59.271 00:08:59.271 Run status group 0 (all jobs): 00:08:59.271 READ: bw=94.8MiB/s (99.4MB/s), 94.8MiB/s-94.8MiB/s (99.4MB/s-99.4MB/s), io=190MiB (199MB), run=2001-2001msec 00:08:59.271 WRITE: bw=94.2MiB/s (98.8MB/s), 94.2MiB/s-94.2MiB/s (98.8MB/s-98.8MB/s), io=189MiB (198MB), run=2001-2001msec 00:08:59.271 ----------------------------------------------------- 00:08:59.271 Suppressions used: 00:08:59.271 count bytes template 00:08:59.271 1 32 /usr/src/fio/parse.c 00:08:59.271 1 8 libtcmalloc_minimal.so 00:08:59.271 ----------------------------------------------------- 00:08:59.271 00:08:59.271 10:29:33 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:59.271 10:29:33 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:59.271 10:29:33 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:59.271 10:29:33 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:59.271 10:29:33 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:59.271 10:29:33 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:59.271 10:29:33 nvme.nvme_fio -- 
nvme/nvme.sh@41 -- # bs=4096 00:08:59.271 10:29:33 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:59.271 10:29:33 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:59.271 10:29:33 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:08:59.271 10:29:33 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:59.271 10:29:33 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:08:59.271 10:29:33 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:59.271 10:29:33 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:08:59.271 10:29:33 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:08:59.271 10:29:33 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:08:59.271 10:29:33 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:59.271 10:29:33 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:08:59.271 10:29:33 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:08:59.271 10:29:33 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:59.271 10:29:33 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:59.271 10:29:33 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:08:59.271 10:29:33 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:59.271 10:29:33 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:59.271 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:59.271 fio-3.35 00:08:59.271 Starting 1 thread 00:09:04.532 00:09:04.532 test: (groupid=0, jobs=1): err= 0: pid=77516: Sat Sep 28 10:29:39 2024 00:09:04.532 read: IOPS=24.6k, BW=96.1MiB/s (101MB/s)(192MiB/2001msec) 00:09:04.532 slat (nsec): min=4210, max=55340, avg=4848.80, stdev=1814.59 00:09:04.532 clat (usec): min=211, max=10170, avg=2601.35, stdev=684.20 00:09:04.532 lat (usec): min=216, max=10207, avg=2606.20, stdev=685.33 00:09:04.532 clat percentiles (usec): 00:09:04.532 | 1.00th=[ 1745], 5.00th=[ 2212], 10.00th=[ 2311], 20.00th=[ 2343], 00:09:04.532 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2442], 60.00th=[ 2442], 00:09:04.532 | 70.00th=[ 2474], 80.00th=[ 2573], 90.00th=[ 2900], 95.00th=[ 4047], 00:09:04.532 | 99.00th=[ 5800], 99.50th=[ 6456], 99.90th=[ 6783], 99.95th=[ 7373], 00:09:04.532 | 99.99th=[ 9896] 00:09:04.532 bw ( KiB/s): min=93472, max=102704, per=99.18%, avg=97584.00, stdev=4697.82, samples=3 00:09:04.532 iops : min=23368, max=25676, avg=24396.00, stdev=1174.45, samples=3 00:09:04.532 write: IOPS=24.4k, BW=95.5MiB/s (100MB/s)(191MiB/2001msec); 0 zone resets 00:09:04.532 slat (nsec): min=4322, max=55190, avg=5135.42, stdev=1789.30 00:09:04.532 clat (usec): min=227, max=10030, avg=2601.82, stdev=678.65 00:09:04.532 
lat (usec): min=231, max=10040, avg=2606.96, stdev=679.74 00:09:04.532 clat percentiles (usec): 00:09:04.532 | 1.00th=[ 1745], 5.00th=[ 2245], 10.00th=[ 2311], 20.00th=[ 2376], 00:09:04.532 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2442], 60.00th=[ 2442], 00:09:04.532 | 70.00th=[ 2474], 80.00th=[ 2573], 90.00th=[ 2900], 95.00th=[ 4015], 00:09:04.532 | 99.00th=[ 5800], 99.50th=[ 6456], 99.90th=[ 6783], 99.95th=[ 7570], 00:09:04.532 | 99.99th=[ 9634] 00:09:04.532 bw ( KiB/s): min=93648, max=102728, per=99.81%, avg=97562.67, stdev=4667.41, samples=3 00:09:04.532 iops : min=23412, max=25682, avg=24390.67, stdev=1166.85, samples=3 00:09:04.532 lat (usec) : 250=0.01%, 500=0.02%, 750=0.03%, 1000=0.03% 00:09:04.532 lat (msec) : 2=2.34%, 4=92.51%, 10=5.06%, 20=0.01% 00:09:04.532 cpu : usr=99.20%, sys=0.15%, ctx=11, majf=0, minf=624 00:09:04.532 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:04.532 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:04.532 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:04.532 issued rwts: total=49219,48899,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:04.532 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:04.532 00:09:04.532 Run status group 0 (all jobs): 00:09:04.532 READ: bw=96.1MiB/s (101MB/s), 96.1MiB/s-96.1MiB/s (101MB/s-101MB/s), io=192MiB (202MB), run=2001-2001msec 00:09:04.532 WRITE: bw=95.5MiB/s (100MB/s), 95.5MiB/s-95.5MiB/s (100MB/s-100MB/s), io=191MiB (200MB), run=2001-2001msec 00:09:04.790 ----------------------------------------------------- 00:09:04.790 Suppressions used: 00:09:04.790 count bytes template 00:09:04.790 1 32 /usr/src/fio/parse.c 00:09:04.790 1 8 libtcmalloc_minimal.so 00:09:04.790 ----------------------------------------------------- 00:09:04.790 00:09:04.790 ************************************ 00:09:04.790 END TEST nvme_fio 00:09:04.790 ************************************ 00:09:04.790 10:29:39 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:04.790 10:29:39 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:04.790 00:09:04.790 real 0m26.118s 00:09:04.790 user 0m17.203s 00:09:04.790 sys 0m15.360s 00:09:04.790 10:29:39 nvme.nvme_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:04.790 10:29:39 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:04.791 ************************************ 00:09:04.791 END TEST nvme 00:09:04.791 ************************************ 00:09:04.791 00:09:04.791 real 1m33.222s 00:09:04.791 user 3m31.313s 00:09:04.791 sys 0m25.625s 00:09:04.791 10:29:39 nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:04.791 10:29:39 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:04.791 10:29:39 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:04.791 10:29:39 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:04.791 10:29:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:04.791 10:29:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:04.791 10:29:39 -- common/autotest_common.sh@10 -- # set +x 00:09:04.791 ************************************ 00:09:04.791 START TEST nvme_scc 00:09:04.791 ************************************ 00:09:04.791 10:29:39 nvme_scc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:04.791 * Looking for test storage... 
00:09:04.791 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:04.791 10:29:39 nvme_scc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:04.791 10:29:39 nvme_scc -- common/autotest_common.sh@1681 -- # lcov --version 00:09:04.791 10:29:39 nvme_scc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:05.049 10:29:39 nvme_scc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:05.049 10:29:39 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:05.049 10:29:39 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:05.049 10:29:39 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:05.049 10:29:39 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:05.049 10:29:39 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:05.049 10:29:39 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:05.049 10:29:39 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:05.049 10:29:39 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:05.049 10:29:39 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:05.049 10:29:39 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:05.049 10:29:39 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:05.049 10:29:39 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:05.049 10:29:39 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:05.049 10:29:39 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:05.049 10:29:39 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:05.049 10:29:39 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:05.049 10:29:39 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:05.049 10:29:39 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:05.049 10:29:39 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:05.049 10:29:39 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:05.049 10:29:39 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:05.049 10:29:39 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:05.049 10:29:39 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:05.049 10:29:39 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:05.049 10:29:39 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:05.049 10:29:39 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:05.049 10:29:39 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:05.049 10:29:39 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:05.049 10:29:39 nvme_scc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:05.049 10:29:39 nvme_scc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:05.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.049 --rc genhtml_branch_coverage=1 00:09:05.049 --rc genhtml_function_coverage=1 00:09:05.049 --rc genhtml_legend=1 00:09:05.049 --rc geninfo_all_blocks=1 00:09:05.049 --rc geninfo_unexecuted_blocks=1 00:09:05.049 00:09:05.049 ' 00:09:05.049 10:29:39 nvme_scc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:05.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.049 --rc genhtml_branch_coverage=1 00:09:05.049 --rc genhtml_function_coverage=1 00:09:05.049 --rc genhtml_legend=1 00:09:05.049 --rc geninfo_all_blocks=1 00:09:05.049 --rc geninfo_unexecuted_blocks=1 00:09:05.049 00:09:05.049 ' 00:09:05.049 10:29:39 nvme_scc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:09:05.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.049 --rc genhtml_branch_coverage=1 00:09:05.049 --rc genhtml_function_coverage=1 00:09:05.049 --rc genhtml_legend=1 00:09:05.049 --rc geninfo_all_blocks=1 00:09:05.049 --rc geninfo_unexecuted_blocks=1 00:09:05.049 00:09:05.049 ' 00:09:05.049 10:29:39 nvme_scc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:05.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.049 --rc genhtml_branch_coverage=1 00:09:05.049 --rc genhtml_function_coverage=1 00:09:05.049 --rc genhtml_legend=1 00:09:05.049 --rc geninfo_all_blocks=1 00:09:05.049 --rc geninfo_unexecuted_blocks=1 00:09:05.049 00:09:05.049 ' 00:09:05.049 10:29:39 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:05.049 10:29:39 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:05.049 10:29:39 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:05.049 10:29:39 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:05.050 10:29:39 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:05.050 10:29:39 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:05.050 10:29:39 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:05.050 10:29:39 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:05.050 10:29:39 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:05.050 10:29:39 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.050 10:29:39 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.050 10:29:39 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.050 10:29:39 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:05.050 10:29:39 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
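The cmp_versions trace above (lt 1.15 2) is scripts/common.sh deciding whether the installed lcov predates 2.x before the coverage flags are chosen. A condensed, self-contained sketch of that comparison, following the variable names visible in the trace (illustrative only, not the verbatim scripts/common.sh):

decimal() {
    # Plain integers pass through; anything else (an "rc" suffix, an empty field) counts as 0.
    local d=$1
    [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0
}

cmp_versions() {
    # cmp_versions 1.15 '<' 2  -> true, because 1 < 2 in the first field
    local -a ver1 ver2
    local op=$2 v a b n
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((v = 0; v < n; v++)); do
        a=$(decimal "${ver1[v]:-0}")
        b=$(decimal "${ver2[v]:-0}")
        if ((a > b)); then [[ $op == '>' || $op == '>=' ]]; return; fi
        if ((a < b)); then [[ $op == '<' || $op == '<=' ]]; return; fi
    done
    [[ $op == '==' || $op == '>=' || $op == '<=' ]]   # every field equal
}

lt() { cmp_versions "$1" '<' "$2"; }

lt 1.15 2 && echo "lcov predates 2.x, keep the branch/function coverage flags"

Splitting on IFS=.-: is what lets the same helper rank strings like 2.39.2 or 1.54-rc1 field by field.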
00:09:05.050 10:29:39 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:05.050 10:29:39 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:05.050 10:29:39 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:05.050 10:29:39 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:05.050 10:29:39 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:05.050 10:29:39 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:05.050 10:29:39 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:05.050 10:29:39 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:05.050 10:29:39 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:05.050 10:29:39 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:05.050 10:29:39 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:05.050 10:29:39 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:05.050 10:29:39 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:05.050 10:29:39 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:05.307 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:05.307 Waiting for block devices as requested 00:09:05.307 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:05.565 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:05.565 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:05.565 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:10.833 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:10.833 10:29:45 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:10.833 10:29:45 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:10.833 10:29:45 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:10.833 10:29:45 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:10.833 10:29:45 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
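The long run of near-identical xtrace entries around this point is nvme_get in test/common/nvme/functions.sh folding every field of the controller's id-ctrl output (and, further down, each namespace's id-ns output) into a global associative array such as nvme0. A minimal sketch of that loop, assuming the helper works the way the trace suggests (not the verbatim functions.sh):

nvme_get() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"                       # declare the global array, e.g. nvme0=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}              # "vid      " -> "vid"
        val=${val#"${val%%[![:space:]]*}"}    # drop leading blanks, keep embedded/trailing ones
        [[ -n $reg && -n $val ]] || continue  # skip headers and empty fields
        eval "${ref}[$reg]=\"\$val\""         # nvme0[vid]="0x1b36", nvme0[mn]="QEMU NVMe Ctrl ", ...
    done < <(nvme "$@")                       # the trace runs /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
}

Each eval/assignment pair in the trace is one pass through that loop; afterwards the test can check fields such as ${nvme0[oncs]} or ${nvme0n1[flbas]} without re-running nvme-cli.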
00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.833 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.834 10:29:45 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:10.834 10:29:45 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.834 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.835 10:29:45 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.835 10:29:45 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.835 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:10.836 10:29:45 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:10.836 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
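Zooming out, these per-namespace reads sit inside the controller scan whose @47-@63 lines appear a little further down, first registering nvme0 and then repeating for nvme1 and nvme2. A condensed sketch of that outer loop, reusing the helper and array names visible in the trace (pci_can_use, nvme_get, ctrls, nvmes, bdfs, ordered_ctrls); the BDF lookup and the namespace bookkeeping are simplified guesses:

    # Condensed sketch of the scan behind the functions.sh@47-@63 trace lines.
    # pci_can_use and nvme_get are the helpers traced in this log (see the
    # nvme_get sketch above); everything not shown in the trace is assumed.
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    scan_nvme_ctrls_sketch() {
        local ctrl ctrl_dev ns ns_dev pci
        for ctrl in /sys/class/nvme/nvme*; do
            [[ -e $ctrl ]] || continue
            pci=$(basename "$(readlink -f "$ctrl/device")")   # BDF, e.g. 0000:00:10.0 (lookup assumed)
            pci_can_use "$pci" || continue                    # honour the allow/block lists
            ctrl_dev=${ctrl##*/}                              # nvme0, nvme1, ...
            nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"     # populate nvme1[vid], nvme1[oncs], ...
            declare -gA "${ctrl_dev}_ns=()"
            for ns in "$ctrl/${ctrl##*/}n"*; do               # namespaces: nvme1n1, ...
                [[ -e $ns ]] || continue
                ns_dev=${ns##*/}
                nvme_get "$ns_dev" id-ns "/dev/$ns_dev"       # populate nvme1n1[nsze], ...
                eval "${ctrl_dev}_ns[\${ns_dev##*n}]=\$ns_dev"  # _ctrl_ns[1]=nvme1n1 in the trace
            done
            ctrls[$ctrl_dev]=$ctrl_dev
            nvmes[$ctrl_dev]=${ctrl_dev}_ns
            bdfs[$ctrl_dev]=$pci
            ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
        done
    }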
00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:10.837 10:29:45 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:10.837 10:29:45 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:10.837 10:29:45 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:10.837 10:29:45 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:10.838 10:29:45 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:10.838 10:29:45 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.838 10:29:45 nvme_scc -- 
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:10.838 10:29:45 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:10.838 
10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 
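The namespace fields captured earlier (flbas and the lbaf0-lbaf7 strings for nvme0n1) are enough to work out the active block size: per the NVMe Identify Namespace layout, the low four bits of FLBAS select the LBA format and LBADS is log2 of the data size, which matches the "lbaf4 ... lbads:12 ... (in use)" entry recorded above for flbas=0x4. A small illustrative decode, with lbaf_field() as a hypothetical helper rather than anything in functions.sh:

    # Illustrative only: derive the in-use block size from the arrays filled above.
    # FLBAS/LBADS interpretation follows the NVMe Identify Namespace layout;
    # lbaf_field() is a hypothetical helper, not part of functions.sh.
    lbaf_field() {            # lbaf_field <lbafN string> <key>  -> numeric value
        local str=$1 key=$2
        [[ $str =~ ${key}:([0-9]+) ]] && echo "${BASH_REMATCH[1]}"
    }
    fmt=$(( ${nvme0n1[flbas]} & 0xf ))                # 0x4 & 0xf = 4
    lbads=$(lbaf_field "${nvme0n1[lbaf$fmt]}" lbads)  # lbaf4 -> lbads:12
    echo "nvme0n1 uses $((1 << lbads))-byte blocks"   # 2^12 = 4096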
00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:10.838 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:10.839 10:29:45 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.839 10:29:45 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:10.839 10:29:45 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:10.839 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:10.840 10:29:45 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
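Controller fields such as nvme1's oncs=0x15d captured above are what an nvme_scc run is ultimately interested in: in my reading of the NVMe base specification, ONCS bit 8 advertises the Copy (simple copy) command, and 0x15d has that bit set. A hypothetical check built on the parsed array; the actual test may gate on this differently:

    # Hypothetical capability check on the array populated above; the real
    # nvme_scc test may use a different helper, but the "ONCS bit 8 = Copy"
    # reading comes from the NVMe base specification.
    supports_scc() {
        local oncs=${1:-0}
        (( (oncs >> 8) & 1 ))            # bit 8: Copy command supported
    }
    if supports_scc "${nvme1[oncs]}"; then   # 0x15d -> bit 8 set
        echo "nvme1 advertises the simple copy command"
    fi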
00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.840 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1n1[ncap]=0x17a17a 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.841 10:29:45 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme1n1[nvmcap]="0"' 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.841 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:10.842 
10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:10.842 10:29:45 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:10.842 10:29:45 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:10.842 10:29:45 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:10.842 10:29:45 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:10.842 10:29:45 nvme_scc -- 
nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.842 10:29:45 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:10.842 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:10.843 10:29:45 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:10.843 10:29:45 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
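The repeated IFS=: / read -r reg val / eval entries above and below all come from one small parsing loop in nvme/functions.sh: nvme_get runs nvme-cli's id-ctrl (or id-ns) against the device and folds every "field : value" line of its output into a global associative array named after the device node. A minimal standalone sketch of that pattern, assuming a plain nvme binary on PATH rather than the tree's own build at /usr/local/src/nvme-cli/nvme:

nvme_get_sketch() {
    # ref is the array to fill (e.g. nvme2), cmd is id-ctrl or id-ns,
    # dev is the character device to query (e.g. /dev/nvme2).
    local ref=$1 cmd=$2 dev=$3 reg val
    local -gA "$ref"                     # declare the global associative array
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue        # header/blank lines carry no value
        reg=${reg//[[:space:]]/}         # "oncs   " -> oncs, "ps    0" -> ps0
        val=${val# }                     # drop the single space after the colon
        eval "${ref}[\$reg]=\$val"       # e.g. nvme2[oncs]=0x15d
    done < <(nvme "$cmd" "$dev")
}
# Example: nvme_get_sketch nvme2 id-ctrl /dev/nvme2; echo "${nvme2[sn]}"

Only the first colon on each line splits the field name from the value, which is why multi-colon values such as the lbaf descriptors ('ms:0 lbads:9 rp:0 ') survive intact in the arrays.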
00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:10.843 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:10.844 10:29:45 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 
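Around each nvme_get call sits the discovery loop whose other steps are also visible in this trace (functions.sh lines 47-63 plus the pci_can_use check from scripts/common.sh): every controller under /sys/class/nvme is checked against the PCI allow/block filters (both empty in this run, so everything passes), its identify-controller data is parsed, each of its namespaces gets the same treatment, and the results are indexed by controller name. A hedged reconstruction of that flow, reusing nvme_get_sketch from the sketch above; PCI_ALLOWED and PCI_BLOCKED are assumed names for the filter lists, not necessarily the script's own:

declare -A ctrls=() nvmes=() bdfs=()
ordered_ctrls=()

pci_can_use_sketch() {
    # Accept a PCI address unless an allow-list excludes it or a block-list
    # names it; with both lists empty (as in this log) everything passes.
    local pci=$1 allowed=${PCI_ALLOWED:-} blocked=${PCI_BLOCKED:-}
    if [[ -n $allowed && " $allowed " != *" $pci "* ]]; then
        return 1
    fi
    [[ " $blocked " == *" $pci "* ]] && return 1
    return 0
}

for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    ctrl_dev=${ctrl##*/}                                  # e.g. nvme2
    pci=$(basename "$(readlink -f "$ctrl/device")")       # BDF, e.g. 0000:00:12.0
    pci_can_use_sketch "$pci" || continue
    nvme_get_sketch "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"  # fills nvme2[...]
    declare -ga "${ctrl_dev}_ns"                          # per-controller NSID map
    declare -n _ctrl_ns=${ctrl_dev}_ns
    for ns in "$ctrl/${ctrl##*/}n"*; do                   # nvme2n1, nvme2n2, ...
        [[ -e $ns ]] || continue
        ns_dev=${ns##*/}
        nvme_get_sketch "$ns_dev" id-ns "/dev/$ns_dev"    # fills nvme2n1[...]
        _ctrl_ns[${ns##*n}]=$ns_dev                       # key is the NSID
    done
    ctrls["$ctrl_dev"]=$ctrl_dev
    nvmes["$ctrl_dev"]=${ctrl_dev}_ns                     # name of the NSID map
    bdfs["$ctrl_dev"]=$pci
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev            # e.g. slot 2 -> nvme2
done

The registration entries traced earlier for nvme1 (ctrls/nvmes/bdfs/ordered_ctrls at functions.sh@58-63) are exactly this last step; the same assignments will appear again once nvme2's namespaces finish parsing.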
00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0x3 ]] 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.844 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:10.845 
10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
0x100000 ]] 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 
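Everything this wall of assignments captures is input for the checks that follow; for the nvme_scc test the interesting field is oncs (0x15d for nvme2 above), whose bit 8 in the NVMe identify-controller structure advertises the Copy command that simple-copy tests exercise. Whether the script queries it in exactly this form is not shown in this part of the log, but a check over the arrays populated above could look like the following sketch (the helper name is made up for illustration):

# Hypothetical helper: true when a parsed controller advertises the NVMe
# Copy command (ONCS bit 8), using the nvmeX arrays filled in by nvme_get.
ctrl_supports_copy_sketch() {
    local ctrl=$1 oncs
    eval "oncs=\${${ctrl}[oncs]:-0}"   # e.g. oncs=0x15d for nvme2
    (( oncs & 1 << 8 ))                # bit 8 set -> Copy supported
}
# Example: ctrl_supports_copy_sketch nvme2 && echo "nvme2 supports Simple Copy"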
00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:10.845 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.846 10:29:45 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:10.846 10:29:45 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
ms:8 lbads:9 rp:0 ]] 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:10.846 10:29:45 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:10.847 10:29:45 nvme_scc -- 
nvme/functions.sh@18 -- # shift 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:10.847 10:29:45 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 
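The block above is SPDK's nvme/functions.sh helper nvme_get capturing the output of /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 into the global associative array nvme2n2: each "field : value" line is split on the first colon and stored with eval. A minimal sketch of that capture pattern follows; the function name, the whitespace trimming, and relying on a bare nvme binary on PATH are illustrative assumptions here, not the script's own code.

    # Hedged sketch of the capture loop traced above (bash 4+ assumed).
    nvme_get_sketch() {
        local ref=$1 dev=$2 reg val
        declare -gA "$ref"                       # global associative array, e.g. nvme2n2
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue            # header/blank lines carry no value
            reg=${reg//[[:space:]]/}             # field name with padding removed
            val="${val#"${val%%[![:space:]]*}"}" # trim leading whitespace from the value
            eval "${ref}[${reg}]=\"${val}\""     # e.g. nvme2n2[nsze]="0x100000"
        done < <(nvme id-ns "$dev")
    }
    # usage: nvme_get_sketch nvme2n2 /dev/nvme2n2; echo "${nvme2n2[nsze]}"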
00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.847 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 
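The lbaf0 through lbaf7 descriptors recorded next (for example "ms:0 lbads:12 rp:0 (in use)") describe each supported LBA format: ms is the per-block metadata size in bytes, lbads is log2 of the data block size, and rp is the relative performance hint, so lbads:9 means 512-byte blocks and lbads:12 means 4096-byte blocks. A one-line check of the in-use format:

    # lbads is log2 of the LBA data size; the in-use lbaf4 below has lbads:12 and ms:0.
    lbads=12
    echo $(( 1 << lbads ))   # 4096 bytes per block, no per-block metadata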
00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.848 
10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.848 
10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.848 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:10.849 10:29:45 
nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:10.849 
10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:10.849 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:10.850 10:29:45 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:10.850 10:29:45 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:10.850 10:29:45 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:10.850 10:29:45 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:10.850 10:29:45 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
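Shortly before this point the outer loop in nvme/functions.sh finished nvme2 (recorded against PCI address 0000:00:12.0) and picked up /sys/class/nvme/nvme3 at 0000:00:13.0. The enumeration itself is ordinary sysfs walking; a hedged sketch is below, assuming each controller's device symlink resolves to its PCI directory, and using illustrative array names rather than the script's own ctrls/nvmes/bdfs/ordered_ctrls bookkeeping.

    # Sketch: map each NVMe controller under sysfs to its PCI bus:device.function.
    declare -A ctrls_sketch bdfs_sketch
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue                        # no controllers present
        name=${ctrl##*/}                                  # e.g. nvme3
        bdf=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:13.0
        ctrls_sketch[$name]=$name
        bdfs_sketch[$name]=$bdf
    done
    for name in "${!bdfs_sketch[@]}"; do
        echo "$name -> ${bdfs_sketch[$name]}"
    done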
00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:10.850 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:10.851 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.110 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.110 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:11.110 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:11.110 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.110 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.110 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.110 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:11.110 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:11.110 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.110 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.110 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.110 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:11.110 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:11.110 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.110 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.110 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.110 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:11.110 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:11.110 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.110 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.110 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.110 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:11.110 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:11.110 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.111 
10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.111 10:29:45 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:11.111 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:11.112 10:29:45 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:11.112 10:29:45 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:11.112 
10:29:45 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:11.112 10:29:45 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:11.113 10:29:45 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:11.113 10:29:45 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:11.113 10:29:45 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:11.113 10:29:45 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:11.113 10:29:45 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:11.113 10:29:45 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:11.370 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:11.936 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:11.936 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:11.936 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:11.936 0000:00:12.0 (1b36 
0010): nvme -> uio_pci_generic 00:09:11.936 10:29:46 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:11.936 10:29:46 nvme_scc -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:11.936 10:29:46 nvme_scc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:11.936 10:29:46 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:11.936 ************************************ 00:09:11.936 START TEST nvme_simple_copy 00:09:11.936 ************************************ 00:09:11.936 10:29:46 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:12.194 Initializing NVMe Controllers 00:09:12.194 Attaching to 0000:00:10.0 00:09:12.194 Controller supports SCC. Attached to 0000:00:10.0 00:09:12.194 Namespace ID: 1 size: 6GB 00:09:12.194 Initialization complete. 00:09:12.194 00:09:12.194 Controller QEMU NVMe Ctrl (12340 ) 00:09:12.194 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:09:12.194 Namespace Block Size:4096 00:09:12.194 Writing LBAs 0 to 63 with Random Data 00:09:12.194 Copied LBAs from 0 - 63 to the Destination LBA 256 00:09:12.194 LBAs matching Written Data: 64 00:09:12.195 ************************************ 00:09:12.195 END TEST nvme_simple_copy 00:09:12.195 ************************************ 00:09:12.195 00:09:12.195 real 0m0.236s 00:09:12.195 user 0m0.068s 00:09:12.195 sys 0m0.066s 00:09:12.195 10:29:46 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:12.195 10:29:46 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:09:12.195 ************************************ 00:09:12.195 END TEST nvme_scc 00:09:12.195 ************************************ 00:09:12.195 00:09:12.195 real 0m7.438s 00:09:12.195 user 0m1.023s 00:09:12.195 sys 0m1.316s 00:09:12.195 10:29:46 nvme_scc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:12.195 10:29:46 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:12.195 10:29:46 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:09:12.195 10:29:46 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:09:12.195 10:29:46 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:09:12.195 10:29:46 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:09:12.195 10:29:46 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:09:12.195 10:29:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:12.195 10:29:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:12.195 10:29:46 -- common/autotest_common.sh@10 -- # set +x 00:09:12.195 ************************************ 00:09:12.195 START TEST nvme_fdp 00:09:12.195 ************************************ 00:09:12.195 10:29:46 nvme_fdp -- common/autotest_common.sh@1125 -- # test/nvme/nvme_fdp.sh 00:09:12.454 * Looking for test storage... 
00:09:12.454 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:12.454 10:29:46 nvme_fdp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:12.454 10:29:46 nvme_fdp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:12.454 10:29:46 nvme_fdp -- common/autotest_common.sh@1681 -- # lcov --version 00:09:12.454 10:29:47 nvme_fdp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:12.454 10:29:47 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:12.454 10:29:47 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:12.454 10:29:47 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:12.454 10:29:47 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:09:12.454 10:29:47 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:09:12.454 10:29:47 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:09:12.454 10:29:47 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:09:12.454 10:29:47 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:09:12.454 10:29:47 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:09:12.454 10:29:47 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:09:12.454 10:29:47 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:12.454 10:29:47 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:09:12.454 10:29:47 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:09:12.454 10:29:47 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:12.454 10:29:47 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:12.454 10:29:47 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:12.454 10:29:47 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:12.454 10:29:47 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:12.454 10:29:47 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:12.454 10:29:47 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:12.454 10:29:47 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:12.454 10:29:47 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:12.454 10:29:47 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:12.454 10:29:47 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:12.454 10:29:47 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:12.454 10:29:47 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:12.454 10:29:47 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:12.454 10:29:47 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:12.454 10:29:47 nvme_fdp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:12.454 10:29:47 nvme_fdp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:12.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.454 --rc genhtml_branch_coverage=1 00:09:12.454 --rc genhtml_function_coverage=1 00:09:12.454 --rc genhtml_legend=1 00:09:12.454 --rc geninfo_all_blocks=1 00:09:12.454 --rc geninfo_unexecuted_blocks=1 00:09:12.454 00:09:12.454 ' 00:09:12.454 10:29:47 nvme_fdp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:12.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.454 --rc genhtml_branch_coverage=1 00:09:12.454 --rc genhtml_function_coverage=1 00:09:12.454 --rc genhtml_legend=1 00:09:12.454 --rc geninfo_all_blocks=1 00:09:12.454 --rc geninfo_unexecuted_blocks=1 00:09:12.454 00:09:12.454 ' 00:09:12.454 10:29:47 nvme_fdp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:09:12.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.454 --rc genhtml_branch_coverage=1 00:09:12.454 --rc genhtml_function_coverage=1 00:09:12.454 --rc genhtml_legend=1 00:09:12.454 --rc geninfo_all_blocks=1 00:09:12.454 --rc geninfo_unexecuted_blocks=1 00:09:12.454 00:09:12.454 ' 00:09:12.454 10:29:47 nvme_fdp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:12.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.454 --rc genhtml_branch_coverage=1 00:09:12.454 --rc genhtml_function_coverage=1 00:09:12.454 --rc genhtml_legend=1 00:09:12.454 --rc geninfo_all_blocks=1 00:09:12.454 --rc geninfo_unexecuted_blocks=1 00:09:12.454 00:09:12.454 ' 00:09:12.454 10:29:47 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:12.454 10:29:47 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:12.454 10:29:47 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:12.454 10:29:47 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:12.454 10:29:47 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:12.454 10:29:47 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:12.454 10:29:47 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:12.454 10:29:47 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:12.454 10:29:47 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:12.454 10:29:47 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.454 10:29:47 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.454 10:29:47 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.454 10:29:47 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:12.454 10:29:47 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
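[editor's sketch] The trace above and below repeats one pattern many times: nvme/functions.sh runs nvme-cli's id-ctrl, splits each output line on ':' into a register/value pair, stores it in a per-controller bash associative array, and later tests ONCS bit 8 to decide whether a controller supports the Simple Copy command. The following minimal sketch reproduces that pattern under stated assumptions; it is not the project's functions.sh, and the device path, trimming, and sample usage are illustrative only.

    # Minimal sketch of the id-ctrl parsing and SCC check seen in the trace.
    # Assumptions: nvme-cli is installed, /dev/nvme0 exists, and id-ctrl prints
    # "name : value" lines (as in the trace); whitespace handling is simplified.
    declare -A ctrl_regs

    parse_id_ctrl() {                      # e.g. parse_id_ctrl /dev/nvme0
        local dev=$1 reg val
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}       # strip padding around the register name
            val=${val# }                   # strip the leading space of the value
            [[ -n $val ]] && ctrl_regs[$reg]=$val
        done < <(nvme id-ctrl "$dev")
    }

    has_simple_copy() {
        local oncs=${ctrl_regs[oncs]:-0}
        (( oncs & 1 << 8 ))                # ONCS bit 8: Copy command supported
    }

    parse_id_ctrl /dev/nvme0
    if has_simple_copy; then
        echo "controller supports SCC (oncs=${ctrl_regs[oncs]})"
    fi

With oncs=0x15d, as reported for every controller in this run, bit 8 is set, which is why each ctrl_has_scc check in the trace echoes the controller name.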
00:09:12.454 10:29:47 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:12.454 10:29:47 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:12.454 10:29:47 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:12.454 10:29:47 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:12.454 10:29:47 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:12.454 10:29:47 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:12.454 10:29:47 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:12.454 10:29:47 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:12.454 10:29:47 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:12.454 10:29:47 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:12.454 10:29:47 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:12.713 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:12.713 Waiting for block devices as requested 00:09:12.971 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:12.971 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:12.971 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:12.971 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:18.243 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:18.243 10:29:52 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:18.243 10:29:52 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:18.243 10:29:52 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:18.243 10:29:52 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:18.243 10:29:52 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:18.243 10:29:52 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:18.243 10:29:52 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:18.243 10:29:52 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:18.243 10:29:52 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:18.243 10:29:52 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:18.243 10:29:52 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:18.243 10:29:52 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:18.243 10:29:52 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:18.243 10:29:52 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:18.243 10:29:52 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:18.243 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.243 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.243 10:29:52 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:18.243 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:18.243 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.243 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.243 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:18.243 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:18.243 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:18.243 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.243 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.243 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
00:09:18.243 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:18.243 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:18.243 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.243 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.243 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:18.243 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:18.243 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:18.243 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.243 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.243 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:18.243 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:18.243 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:18.243 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.243 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.243 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:18.243 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:18.243 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:18.243 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.243 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.243 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:18.243 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:18.243 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:18.243 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.243 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.243 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:18.243 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:18.243 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:18.243 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.243 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.243 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.243 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:18.243 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:18.243 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.243 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.243 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:18.243 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:18.243 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:18.243 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:18.244 10:29:52 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:18.244 10:29:52 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:18.244 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:18.245 10:29:52 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.245 10:29:52 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:18.245 10:29:52 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:18.245 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:18.246 10:29:52 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:18.246 
10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:18.246 10:29:52 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:18.246 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.247 10:29:52 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:18.247 10:29:52 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.247 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:18.248 10:29:52 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:18.248 10:29:52 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:18.248 10:29:52 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:18.248 10:29:52 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.248 10:29:52 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.248 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.249 
10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.249 10:29:52 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.249 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.250 
10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.250 10:29:52 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:18.250 10:29:52 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.250 10:29:52 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.250 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme1[ofcs]=0 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x17a17a ]] 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:18.251 10:29:52 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.251 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:18.252 10:29:52 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:18.252 10:29:52 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:18.252 10:29:52 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:18.252 10:29:52 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:18.253 10:29:52 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:18.253 
10:29:52 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:18.253 10:29:52 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.253 10:29:52 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:18.253 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.254 10:29:52 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.254 10:29:52 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.254 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:18.255 10:29:52 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.255 10:29:52 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.255 10:29:52 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:18.256 10:29:52 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.256 10:29:52 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.256 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.257 10:29:52 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.257 10:29:52 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:18.257 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:18.258 10:29:52 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.258 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
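Around these field dumps, the same trace shows the enumeration loop that feeds the parser: for each controller under /sys/class/nvme it iterates the matching nvmeXnY entries (the 'for ns in "$ctrl/${ctrl##*/}n"*' lines above), checks that the sysfs node exists, derives the /dev name, and runs id-ns on it before recording the result in _ctrl_ns. The following is a hedged, stand-alone approximation of that discovery walk; the echo line stands in for the id-ns call and array bookkeeping and is not the exact SPDK code.

    #!/usr/bin/env bash
    # Sketch of the sysfs walk visible in the trace: controllers first, then namespaces.
    shopt -s nullglob                          # assumption: skip cleanly when no controllers are present
    for ctrl in /sys/class/nvme/nvme*; do
        ctrl_dev=${ctrl##*/}                   # e.g. nvme2
        for ns in "$ctrl/${ctrl##*/}n"*; do    # e.g. /sys/class/nvme/nvme2/nvme2n3
            [[ -e $ns ]] || continue
            ns_dev=${ns##*/}                   # e.g. nvme2n3
            echo "would run: nvme id-ns /dev/$ns_dev (controller $ctrl_dev)"
        done
    done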
00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.259 10:29:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.259 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.259 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:18.259 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:18.259 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.259 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.259 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.259 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:18.260 
10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:18.260 10:29:53 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:18.260 10:29:53 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:18.260 10:29:53 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:18.260 10:29:53 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:18.260 10:29:53 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.260 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.525 10:29:53 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:18.525 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:18.525 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.525 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.525 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:18.525 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:18.525 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:18.525 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.525 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.525 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:18.525 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:18.525 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:18.525 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.525 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.525 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:18.525 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:18.525 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:18.525 10:29:53 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:18.525 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.525 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:18.525 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:18.525 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:18.525 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.525 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.525 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:18.525 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:18.525 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:18.525 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.525 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.525 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:18.525 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:18.525 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:18.525 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.525 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.525 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:18.525 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:18.525 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:18.525 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.525 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.525 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:18.525 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:18.525 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:18.525 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.525 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.525 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.526 10:29:53 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.526 
10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:18.526 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.527 10:29:53 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.527 
10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.527 10:29:53 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.527 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.528 10:29:53 nvme_fdp -- 
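The trace above is nvme/functions.sh walking identify-controller output one "register : value" pair at a time: it splits each line on the colon with IFS=:, strips the padding from the field name, and evals the pair into a per-controller associative array (nvme3[wctemp]=343, nvme3[sqes]=0x66, nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3, and so on). A minimal standalone sketch of the same pattern is below; it assumes nvme-cli is installed and uses /dev/nvme0 as a placeholder device node.

    # Sketch: fold `nvme id-ctrl` output into an associative array, mirroring the
    # IFS=: / read -r reg val loop traced above. /dev/nvme0 is a placeholder.
    declare -A ctrl

    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}        # id-ctrl pads field names; strip the padding
        val=${val# }                    # drop the single space that follows the colon
        [[ -n $reg && -n $val ]] && ctrl[$reg]=$val
    done < <(nvme id-ctrl /dev/nvme0)

    echo "subnqn : ${ctrl[subnqn]:-<none>}"
    echo "ctratt : ${ctrl[ctratt]:-0}"

Once the array is populated, later helpers such as get_nvme_ctrl_feature can read any field by name without re-running the identify command.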
nvme/functions.sh@21 -- # read -r reg val 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:18.528 10:29:53 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:18.528 10:29:53 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:18.529 10:29:53 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:18.529 10:29:53 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:18.529 10:29:53 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:18.529 10:29:53 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:18.529 10:29:53 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:09:18.529 10:29:53 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:18.529 10:29:53 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:18.529 10:29:53 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:18.529 10:29:53 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:18.529 10:29:53 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:18.794 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:19.360 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:19.360 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:19.360 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:19.360 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:19.360 10:29:54 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:19.360 10:29:54 nvme_fdp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:19.360 10:29:54 
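Above, ctrl_has_fdp decides which controller to hand to the FDP test by reading each controller's CTRATT value out of that array and testing bit 19, the Flexible Data Placement attribute: nvme0, nvme1 and nvme2 report 0x8000 (bit clear) and are skipped, while nvme3 reports 0x88010 (bit set) and is echoed as the match. A small sketch of the same bit test against a live device, again assuming nvme-cli and a placeholder device node:

    # Sketch: test the Flexible Data Placement bit (bit 19) of CTRATT, as the
    # ctrl_has_fdp trace above does for each discovered controller.
    dev=${1:-/dev/nvme0}                              # placeholder device node
    ctratt=$(nvme id-ctrl "$dev" | awk -F': *' '$1 ~ /^ctratt/ {print $2}')
    ctratt=${ctratt:-0}                               # guard against an empty read
    if (( ctratt & 1 << 19 )); then
        echo "$dev: FDP supported (ctratt=$ctratt)"
    else
        echo "$dev: no FDP (ctratt=$ctratt)"
    fi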
nvme_fdp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:19.360 10:29:54 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:19.360 ************************************ 00:09:19.360 START TEST nvme_flexible_data_placement 00:09:19.360 ************************************ 00:09:19.360 10:29:54 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:19.618 Initializing NVMe Controllers 00:09:19.618 Attaching to 0000:00:13.0 00:09:19.618 Controller supports FDP Attached to 0000:00:13.0 00:09:19.618 Namespace ID: 1 Endurance Group ID: 1 00:09:19.618 Initialization complete. 00:09:19.618 00:09:19.618 ================================== 00:09:19.618 == FDP tests for Namespace: #01 == 00:09:19.618 ================================== 00:09:19.618 00:09:19.618 Get Feature: FDP: 00:09:19.618 ================= 00:09:19.618 Enabled: Yes 00:09:19.618 FDP configuration Index: 0 00:09:19.618 00:09:19.618 FDP configurations log page 00:09:19.618 =========================== 00:09:19.618 Number of FDP configurations: 1 00:09:19.618 Version: 0 00:09:19.618 Size: 112 00:09:19.618 FDP Configuration Descriptor: 0 00:09:19.618 Descriptor Size: 96 00:09:19.618 Reclaim Group Identifier format: 2 00:09:19.618 FDP Volatile Write Cache: Not Present 00:09:19.618 FDP Configuration: Valid 00:09:19.618 Vendor Specific Size: 0 00:09:19.618 Number of Reclaim Groups: 2 00:09:19.618 Number of Recalim Unit Handles: 8 00:09:19.618 Max Placement Identifiers: 128 00:09:19.618 Number of Namespaces Suppprted: 256 00:09:19.618 Reclaim unit Nominal Size: 6000000 bytes 00:09:19.618 Estimated Reclaim Unit Time Limit: Not Reported 00:09:19.618 RUH Desc #000: RUH Type: Initially Isolated 00:09:19.618 RUH Desc #001: RUH Type: Initially Isolated 00:09:19.618 RUH Desc #002: RUH Type: Initially Isolated 00:09:19.618 RUH Desc #003: RUH Type: Initially Isolated 00:09:19.618 RUH Desc #004: RUH Type: Initially Isolated 00:09:19.618 RUH Desc #005: RUH Type: Initially Isolated 00:09:19.618 RUH Desc #006: RUH Type: Initially Isolated 00:09:19.618 RUH Desc #007: RUH Type: Initially Isolated 00:09:19.618 00:09:19.618 FDP reclaim unit handle usage log page 00:09:19.618 ====================================== 00:09:19.618 Number of Reclaim Unit Handles: 8 00:09:19.618 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:19.618 RUH Usage Desc #001: RUH Attributes: Unused 00:09:19.618 RUH Usage Desc #002: RUH Attributes: Unused 00:09:19.618 RUH Usage Desc #003: RUH Attributes: Unused 00:09:19.618 RUH Usage Desc #004: RUH Attributes: Unused 00:09:19.618 RUH Usage Desc #005: RUH Attributes: Unused 00:09:19.618 RUH Usage Desc #006: RUH Attributes: Unused 00:09:19.618 RUH Usage Desc #007: RUH Attributes: Unused 00:09:19.618 00:09:19.618 FDP statistics log page 00:09:19.618 ======================= 00:09:19.618 Host bytes with metadata written: 2037178368 00:09:19.618 Media bytes with metadata written: 2038272000 00:09:19.618 Media bytes erased: 0 00:09:19.618 00:09:19.618 FDP Reclaim unit handle status 00:09:19.618 ============================== 00:09:19.618 Number of RUHS descriptors: 2 00:09:19.618 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000004932 00:09:19.618 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:19.618 00:09:19.618 FDP write on placement id: 0 success 00:09:19.618 00:09:19.618 Set Feature: Enabling FDP events on Placement handle: 
#0 Success 00:09:19.618 00:09:19.618 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:19.618 00:09:19.618 Get Feature: FDP Events for Placement handle: #0 00:09:19.618 ======================== 00:09:19.618 Number of FDP Events: 6 00:09:19.618 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:19.618 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:19.618 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:09:19.618 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:19.618 FDP Event: #4 Type: Media Reallocated Enabled: No 00:09:19.618 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:19.618 00:09:19.619 FDP events log page 00:09:19.619 =================== 00:09:19.619 Number of FDP events: 1 00:09:19.619 FDP Event #0: 00:09:19.619 Event Type: RU Not Written to Capacity 00:09:19.619 Placement Identifier: Valid 00:09:19.619 NSID: Valid 00:09:19.619 Location: Valid 00:09:19.619 Placement Identifier: 0 00:09:19.619 Event Timestamp: 4 00:09:19.619 Namespace Identifier: 1 00:09:19.619 Reclaim Group Identifier: 0 00:09:19.619 Reclaim Unit Handle Identifier: 0 00:09:19.619 00:09:19.619 FDP test passed 00:09:19.619 00:09:19.619 real 0m0.214s 00:09:19.619 user 0m0.056s 00:09:19.619 sys 0m0.057s 00:09:19.619 10:29:54 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:19.619 ************************************ 00:09:19.619 END TEST nvme_flexible_data_placement 00:09:19.619 ************************************ 00:09:19.619 10:29:54 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:19.619 ************************************ 00:09:19.619 END TEST nvme_fdp 00:09:19.619 ************************************ 00:09:19.619 00:09:19.619 real 0m7.339s 00:09:19.619 user 0m0.971s 00:09:19.619 sys 0m1.274s 00:09:19.619 10:29:54 nvme_fdp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:19.619 10:29:54 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:19.619 10:29:54 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:19.619 10:29:54 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:19.619 10:29:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:19.619 10:29:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:19.619 10:29:54 -- common/autotest_common.sh@10 -- # set +x 00:09:19.619 ************************************ 00:09:19.619 START TEST nvme_rpc 00:09:19.619 ************************************ 00:09:19.619 10:29:54 nvme_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:19.619 * Looking for test storage... 
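The FDP statistics log page printed above reports both host bytes and media bytes written with metadata; the ratio of the two is the effective write-amplification factor at that point in the run (2038272000 / 2037178368 is roughly 1.0005 here). A one-liner for that calculation, with the two values copied from the log page above:

    # Sketch: write-amplification factor from the FDP statistics values above.
    host=2037178368    # "Host bytes with metadata written"
    media=2038272000   # "Media bytes with metadata written"
    awk -v h="$host" -v m="$media" 'BEGIN { printf "WAF = %.4f\n", m / h }'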
00:09:19.619 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:19.619 10:29:54 nvme_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:19.619 10:29:54 nvme_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:09:19.619 10:29:54 nvme_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:19.877 10:29:54 nvme_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:19.877 10:29:54 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:19.877 10:29:54 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:19.877 10:29:54 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:19.877 10:29:54 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:19.877 10:29:54 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:19.877 10:29:54 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:19.877 10:29:54 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:19.877 10:29:54 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:19.877 10:29:54 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:19.877 10:29:54 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:19.877 10:29:54 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:19.877 10:29:54 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:19.877 10:29:54 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:19.877 10:29:54 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:19.877 10:29:54 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:19.877 10:29:54 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:19.877 10:29:54 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:19.877 10:29:54 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:19.877 10:29:54 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:19.877 10:29:54 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:19.877 10:29:54 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:19.877 10:29:54 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:19.877 10:29:54 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:19.877 10:29:54 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:19.877 10:29:54 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:19.877 10:29:54 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:19.878 10:29:54 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:19.878 10:29:54 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:19.878 10:29:54 nvme_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:19.878 10:29:54 nvme_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:19.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.878 --rc genhtml_branch_coverage=1 00:09:19.878 --rc genhtml_function_coverage=1 00:09:19.878 --rc genhtml_legend=1 00:09:19.878 --rc geninfo_all_blocks=1 00:09:19.878 --rc geninfo_unexecuted_blocks=1 00:09:19.878 00:09:19.878 ' 00:09:19.878 10:29:54 nvme_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:19.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.878 --rc genhtml_branch_coverage=1 00:09:19.878 --rc genhtml_function_coverage=1 00:09:19.878 --rc genhtml_legend=1 00:09:19.878 --rc geninfo_all_blocks=1 00:09:19.878 --rc geninfo_unexecuted_blocks=1 00:09:19.878 00:09:19.878 ' 00:09:19.878 10:29:54 nvme_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
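The nvme_rpc prologue above runs the same lcov probe as the other suites: scripts/common.sh splits the two version strings on '.', '-' and ':' and compares them field by field to decide whether the installed lcov 1.15 is older than 2. A reduced sketch of that compare, assuming purely numeric version components:

    # Sketch: the field-by-field version compare traced above (cmp_versions),
    # reduced to an "is ver1 < ver2" helper that splits on '.', '-' and ':'.
    version_lt() {
        local IFS=.-:
        local -a v1=($1) v2=($2)
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # strictly older
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1   # strictly newer
        done
        return 1    # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov 1.15 is older than 2.x"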
00:09:19.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.878 --rc genhtml_branch_coverage=1 00:09:19.878 --rc genhtml_function_coverage=1 00:09:19.878 --rc genhtml_legend=1 00:09:19.878 --rc geninfo_all_blocks=1 00:09:19.878 --rc geninfo_unexecuted_blocks=1 00:09:19.878 00:09:19.878 ' 00:09:19.878 10:29:54 nvme_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:19.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.878 --rc genhtml_branch_coverage=1 00:09:19.878 --rc genhtml_function_coverage=1 00:09:19.878 --rc genhtml_legend=1 00:09:19.878 --rc geninfo_all_blocks=1 00:09:19.878 --rc geninfo_unexecuted_blocks=1 00:09:19.878 00:09:19.878 ' 00:09:19.878 10:29:54 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:19.878 10:29:54 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:19.878 10:29:54 nvme_rpc -- common/autotest_common.sh@1507 -- # bdfs=() 00:09:19.878 10:29:54 nvme_rpc -- common/autotest_common.sh@1507 -- # local bdfs 00:09:19.878 10:29:54 nvme_rpc -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:09:19.878 10:29:54 nvme_rpc -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:09:19.878 10:29:54 nvme_rpc -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:19.878 10:29:54 nvme_rpc -- common/autotest_common.sh@1496 -- # local bdfs 00:09:19.878 10:29:54 nvme_rpc -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:19.878 10:29:54 nvme_rpc -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:19.878 10:29:54 nvme_rpc -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:19.878 10:29:54 nvme_rpc -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:19.878 10:29:54 nvme_rpc -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:19.878 10:29:54 nvme_rpc -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:09:19.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:19.878 10:29:54 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:19.878 10:29:54 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=78872 00:09:19.878 10:29:54 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:19.878 10:29:54 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 78872 00:09:19.878 10:29:54 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:19.878 10:29:54 nvme_rpc -- common/autotest_common.sh@831 -- # '[' -z 78872 ']' 00:09:19.878 10:29:54 nvme_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:19.878 10:29:54 nvme_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:19.878 10:29:54 nvme_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:19.878 10:29:54 nvme_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:19.878 10:29:54 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.878 [2024-09-28 10:29:54.562525] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
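get_first_nvme_bdf, traced above, builds the candidate list by piping scripts/gen_nvme.sh through jq -r '.config[].params.traddr' and taking the first address (0000:00:10.0 on this host) as the controller to drive over RPC. A sketch of that selection step; the repository path is the one shown in the log and is only an assumption here.

    # Sketch: pick the first NVMe PCI address the way get_first_nvme_bdf is traced above.
    rootdir=${rootdir:-/home/vagrant/spdk_repo/spdk}   # assumed checkout location from the log
    mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe devices found" >&2; exit 1; }
    echo "first bdf: ${bdfs[0]}"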
00:09:19.878 [2024-09-28 10:29:54.562635] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78872 ] 00:09:20.136 [2024-09-28 10:29:54.691129] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:20.136 [2024-09-28 10:29:54.712647] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:20.136 [2024-09-28 10:29:54.744897] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.136 [2024-09-28 10:29:54.744994] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.701 10:29:55 nvme_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:20.701 10:29:55 nvme_rpc -- common/autotest_common.sh@864 -- # return 0 00:09:20.701 10:29:55 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:20.959 Nvme0n1 00:09:20.959 10:29:55 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:20.959 10:29:55 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:21.217 request: 00:09:21.217 { 00:09:21.217 "bdev_name": "Nvme0n1", 00:09:21.217 "filename": "non_existing_file", 00:09:21.217 "method": "bdev_nvme_apply_firmware", 00:09:21.217 "req_id": 1 00:09:21.217 } 00:09:21.217 Got JSON-RPC error response 00:09:21.217 response: 00:09:21.217 { 00:09:21.217 "code": -32603, 00:09:21.217 "message": "open file failed." 00:09:21.217 } 00:09:21.217 10:29:55 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:21.217 10:29:55 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:21.217 10:29:55 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:21.475 10:29:56 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:21.475 10:29:56 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 78872 00:09:21.475 10:29:56 nvme_rpc -- common/autotest_common.sh@950 -- # '[' -z 78872 ']' 00:09:21.475 10:29:56 nvme_rpc -- common/autotest_common.sh@954 -- # kill -0 78872 00:09:21.475 10:29:56 nvme_rpc -- common/autotest_common.sh@955 -- # uname 00:09:21.475 10:29:56 nvme_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:21.476 10:29:56 nvme_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78872 00:09:21.476 killing process with pid 78872 00:09:21.476 10:29:56 nvme_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:21.476 10:29:56 nvme_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:21.476 10:29:56 nvme_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78872' 00:09:21.476 10:29:56 nvme_rpc -- common/autotest_common.sh@969 -- # kill 78872 00:09:21.476 10:29:56 nvme_rpc -- common/autotest_common.sh@974 -- # wait 78872 00:09:21.733 00:09:21.733 real 0m2.038s 00:09:21.733 user 0m4.006s 00:09:21.733 sys 0m0.435s 00:09:21.733 10:29:56 nvme_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:21.733 10:29:56 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:21.733 ************************************ 00:09:21.733 END TEST nvme_rpc 00:09:21.733 ************************************ 00:09:21.733 10:29:56 
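With spdk_tgt listening, the test above attaches the controller as bdev Nvme0, calls bdev_nvme_apply_firmware with a file that does not exist, and treats the resulting -32603 "open file failed." JSON-RPC error as the expected outcome before detaching. A condensed sketch of that sequence against an already-running spdk_tgt; the rpc.py path and BDF are simply the values seen in the log.

    # Sketch: the attach / expect-failure / detach sequence from the nvme_rpc trace above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py     # rpc.py path as used in the log
    bdf=0000:00:10.0                                    # first controller from the log

    "$rpc" bdev_nvme_attach_controller -b Nvme0 -t PCIe -a "$bdf"

    # Applying firmware from a missing file must fail with "open file failed."
    if "$rpc" bdev_nvme_apply_firmware non_existing_file Nvme0n1; then
        echo "unexpected success" >&2
        rv=1
    else
        echo "got the expected JSON-RPC error"
        rv=0
    fi

    "$rpc" bdev_nvme_detach_controller Nvme0
    exit "$rv"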
-- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:21.733 10:29:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:21.733 10:29:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:21.733 10:29:56 -- common/autotest_common.sh@10 -- # set +x 00:09:21.734 ************************************ 00:09:21.734 START TEST nvme_rpc_timeouts 00:09:21.734 ************************************ 00:09:21.734 10:29:56 nvme_rpc_timeouts -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:21.734 * Looking for test storage... 00:09:21.734 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:21.734 10:29:56 nvme_rpc_timeouts -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:21.734 10:29:56 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lcov --version 00:09:21.734 10:29:56 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:21.734 10:29:56 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:21.734 10:29:56 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:21.734 10:29:56 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:21.734 10:29:56 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:21.734 10:29:56 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:21.734 10:29:56 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:21.734 10:29:56 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:21.734 10:29:56 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:21.734 10:29:56 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:21.734 10:29:56 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:21.734 10:29:56 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:21.734 10:29:56 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:21.734 10:29:56 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:21.734 10:29:56 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:21.734 10:29:56 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:21.734 10:29:56 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:21.734 10:29:56 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:21.734 10:29:56 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:21.734 10:29:56 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:21.734 10:29:56 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:21.734 10:29:56 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:21.734 10:29:56 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:21.734 10:29:56 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:21.734 10:29:56 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:21.734 10:29:56 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:21.734 10:29:56 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:21.734 10:29:56 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:21.734 10:29:56 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:21.734 10:29:56 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:21.734 10:29:56 nvme_rpc_timeouts -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:21.734 10:29:56 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:21.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.734 --rc genhtml_branch_coverage=1 00:09:21.734 --rc genhtml_function_coverage=1 00:09:21.734 --rc genhtml_legend=1 00:09:21.734 --rc geninfo_all_blocks=1 00:09:21.734 --rc geninfo_unexecuted_blocks=1 00:09:21.734 00:09:21.734 ' 00:09:21.734 10:29:56 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:21.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.734 --rc genhtml_branch_coverage=1 00:09:21.734 --rc genhtml_function_coverage=1 00:09:21.734 --rc genhtml_legend=1 00:09:21.734 --rc geninfo_all_blocks=1 00:09:21.734 --rc geninfo_unexecuted_blocks=1 00:09:21.734 00:09:21.734 ' 00:09:21.734 10:29:56 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:21.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.734 --rc genhtml_branch_coverage=1 00:09:21.734 --rc genhtml_function_coverage=1 00:09:21.734 --rc genhtml_legend=1 00:09:21.734 --rc geninfo_all_blocks=1 00:09:21.734 --rc geninfo_unexecuted_blocks=1 00:09:21.734 00:09:21.734 ' 00:09:21.734 10:29:56 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:21.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.734 --rc genhtml_branch_coverage=1 00:09:21.734 --rc genhtml_function_coverage=1 00:09:21.734 --rc genhtml_legend=1 00:09:21.734 --rc geninfo_all_blocks=1 00:09:21.734 --rc geninfo_unexecuted_blocks=1 00:09:21.734 00:09:21.734 ' 00:09:21.734 10:29:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:21.734 10:29:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_78920 00:09:21.734 10:29:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_78920 00:09:21.734 10:29:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=78958 00:09:21.734 10:29:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
00:09:21.734 10:29:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 78958 00:09:21.734 10:29:56 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # '[' -z 78958 ']' 00:09:21.734 10:29:56 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.734 10:29:56 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:21.734 10:29:56 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.734 10:29:56 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:21.734 10:29:56 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:21.734 10:29:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:21.992 [2024-09-28 10:29:56.572893] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:09:21.992 [2024-09-28 10:29:56.573027] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78958 ] 00:09:21.992 [2024-09-28 10:29:56.700543] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:21.992 [2024-09-28 10:29:56.716604] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:21.992 [2024-09-28 10:29:56.748263] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:21.992 [2024-09-28 10:29:56.748300] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.927 10:29:57 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:22.927 10:29:57 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # return 0 00:09:22.927 10:29:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:22.927 Checking default timeout settings: 00:09:22.927 10:29:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:23.186 Making settings changes with rpc: 00:09:23.186 10:29:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:23.186 10:29:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:23.186 Check default vs. modified settings: 00:09:23.186 10:29:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:09:23.186 10:29:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:23.752 10:29:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:23.752 10:29:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:23.752 10:29:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_78920 00:09:23.752 10:29:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:23.752 10:29:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:23.752 10:29:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:23.752 10:29:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:23.752 10:29:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:23.752 10:29:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_78920 00:09:23.752 10:29:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:23.752 Setting action_on_timeout is changed as expected. 00:09:23.752 10:29:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:23.752 10:29:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:23.752 10:29:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:23.752 10:29:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_78920 00:09:23.752 10:29:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:23.752 10:29:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:23.752 10:29:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:23.752 10:29:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_78920 00:09:23.752 10:29:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:23.752 10:29:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:23.752 10:29:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:23.752 10:29:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:23.752 Setting timeout_us is changed as expected. 00:09:23.752 10:29:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:09:23.752 10:29:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:23.752 10:29:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_78920 00:09:23.752 10:29:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:23.752 10:29:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:23.752 10:29:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:23.752 10:29:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_78920 00:09:23.752 10:29:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:23.752 10:29:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:23.752 10:29:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:23.752 10:29:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:23.752 Setting timeout_admin_us is changed as expected. 00:09:23.752 10:29:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:09:23.752 10:29:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:23.752 10:29:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_78920 /tmp/settings_modified_78920 00:09:23.752 10:29:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 78958 00:09:23.752 10:29:58 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # '[' -z 78958 ']' 00:09:23.752 10:29:58 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # kill -0 78958 00:09:23.752 10:29:58 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # uname 00:09:23.752 10:29:58 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:23.752 10:29:58 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78958 00:09:23.752 10:29:58 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:23.752 killing process with pid 78958 00:09:23.752 10:29:58 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:23.752 10:29:58 nvme_rpc_timeouts -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78958' 00:09:23.752 10:29:58 nvme_rpc_timeouts -- common/autotest_common.sh@969 -- # kill 78958 00:09:23.752 10:29:58 nvme_rpc_timeouts -- common/autotest_common.sh@974 -- # wait 78958 00:09:24.010 RPC TIMEOUT SETTING TEST PASSED. 00:09:24.010 10:29:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
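The timeouts test above saves a default save_config snapshot, applies bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort, saves a second snapshot, and then checks each setting with the grep | awk | sed pipeline shown in the trace. A condensed sketch of that comparison step, using the two snapshot paths from the log:

    # Sketch: verify a setting differs between the saved default and modified configs,
    # using the same grep | awk | sed pipeline as the trace above.
    default=/tmp/settings_default_78920      # snapshot paths from the log
    modified=/tmp/settings_modified_78920

    check_setting() {
        local name=$1 before after
        before=$(grep "$name" "$default" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$name" "$modified" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        if [[ "$before" == "$after" ]]; then
            echo "Setting $name did not change" >&2
            return 1
        fi
        echo "Setting $name is changed as expected ($before -> $after)"
    }

    for setting in action_on_timeout timeout_us timeout_admin_us; do
        check_setting "$setting" || exit 1
    done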
00:09:24.010 00:09:24.010 real 0m2.190s 00:09:24.010 user 0m4.419s 00:09:24.010 sys 0m0.439s 00:09:24.010 10:29:58 nvme_rpc_timeouts -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:24.010 10:29:58 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:24.010 ************************************ 00:09:24.010 END TEST nvme_rpc_timeouts 00:09:24.010 ************************************ 00:09:24.010 10:29:58 -- spdk/autotest.sh@239 -- # uname -s 00:09:24.010 10:29:58 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:24.010 10:29:58 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:24.010 10:29:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:24.010 10:29:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:24.010 10:29:58 -- common/autotest_common.sh@10 -- # set +x 00:09:24.010 ************************************ 00:09:24.010 START TEST sw_hotplug 00:09:24.010 ************************************ 00:09:24.010 10:29:58 sw_hotplug -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:24.010 * Looking for test storage... 00:09:24.010 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:24.010 10:29:58 sw_hotplug -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:24.010 10:29:58 sw_hotplug -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:24.010 10:29:58 sw_hotplug -- common/autotest_common.sh@1681 -- # lcov --version 00:09:24.010 10:29:58 sw_hotplug -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:24.010 10:29:58 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:24.010 10:29:58 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:24.010 10:29:58 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:24.010 10:29:58 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:24.010 10:29:58 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:24.010 10:29:58 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:24.010 10:29:58 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:24.010 10:29:58 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:24.010 10:29:58 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:24.010 10:29:58 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:24.010 10:29:58 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:24.010 10:29:58 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:24.010 10:29:58 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:24.010 10:29:58 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:24.010 10:29:58 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:24.010 10:29:58 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:24.010 10:29:58 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:24.010 10:29:58 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:24.010 10:29:58 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:24.010 10:29:58 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:24.010 10:29:58 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:24.010 10:29:58 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:24.010 10:29:58 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:24.010 10:29:58 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:24.010 10:29:58 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:24.010 10:29:58 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:24.010 10:29:58 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:24.010 10:29:58 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:24.010 10:29:58 sw_hotplug -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:24.010 10:29:58 sw_hotplug -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:24.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.010 --rc genhtml_branch_coverage=1 00:09:24.010 --rc genhtml_function_coverage=1 00:09:24.010 --rc genhtml_legend=1 00:09:24.010 --rc geninfo_all_blocks=1 00:09:24.010 --rc geninfo_unexecuted_blocks=1 00:09:24.010 00:09:24.010 ' 00:09:24.010 10:29:58 sw_hotplug -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:24.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.010 --rc genhtml_branch_coverage=1 00:09:24.010 --rc genhtml_function_coverage=1 00:09:24.010 --rc genhtml_legend=1 00:09:24.010 --rc geninfo_all_blocks=1 00:09:24.010 --rc geninfo_unexecuted_blocks=1 00:09:24.010 00:09:24.010 ' 00:09:24.010 10:29:58 sw_hotplug -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:24.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.010 --rc genhtml_branch_coverage=1 00:09:24.010 --rc genhtml_function_coverage=1 00:09:24.010 --rc genhtml_legend=1 00:09:24.010 --rc geninfo_all_blocks=1 00:09:24.010 --rc geninfo_unexecuted_blocks=1 00:09:24.010 00:09:24.010 ' 00:09:24.010 10:29:58 sw_hotplug -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:24.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.010 --rc genhtml_branch_coverage=1 00:09:24.010 --rc genhtml_function_coverage=1 00:09:24.010 --rc genhtml_legend=1 00:09:24.010 --rc geninfo_all_blocks=1 00:09:24.010 --rc geninfo_unexecuted_blocks=1 00:09:24.010 00:09:24.010 ' 00:09:24.010 10:29:58 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:24.269 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:24.528 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:24.528 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:24.528 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:24.528 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:24.528 10:29:59 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:24.528 10:29:59 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:24.528 10:29:59 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
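The lcov detection traced above (lt 1.15 2) is a component-wise decimal version comparison: both versions are split on ".", "-" and ":" and compared element by element. A simplified sketch of that logic, mirroring the xtrace rather than the exact common.sh helper:

# Return 0 (true) if $1 is strictly older than $2.
cmp_lt() {
  local -a ver1 ver2
  IFS='.-:' read -ra ver1 <<< "$1"
  IFS='.-:' read -ra ver2 <<< "$2"
  local n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < n; v++ )); do
    local a=${ver1[v]:-0} b=${ver2[v]:-0}
    (( a > b )) && return 1   # newer component -> not less-than
    (( a < b )) && return 0   # older component -> less-than
  done
  return 1                    # equal -> not less-than
}

cmp_lt 1.15 2 && echo "lcov older than 2.x"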
00:09:24.528 10:29:59 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:24.528 10:29:59 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:24.528 10:29:59 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:24.528 10:29:59 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:24.528 10:29:59 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:24.528 10:29:59 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:24.787 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:25.045 Waiting for block devices as requested 00:09:25.045 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:25.045 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:25.045 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:25.303 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:30.565 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:30.565 10:30:04 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:30.565 10:30:04 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:30.565 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:30.565 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:30.565 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:30.823 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:31.081 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:31.081 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:31.081 10:30:05 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:31.081 10:30:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:31.339 10:30:05 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:31.339 10:30:05 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:31.339 10:30:05 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=79800 00:09:31.339 10:30:05 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:31.339 10:30:05 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:31.339 10:30:05 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:31.339 10:30:05 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:31.339 10:30:05 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:09:31.339 10:30:05 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:09:31.339 10:30:05 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:09:31.339 10:30:05 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:09:31.339 10:30:05 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:09:31.339 10:30:05 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:31.339 10:30:05 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:09:31.339 10:30:05 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:31.339 10:30:05 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:31.339 10:30:05 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:31.339 Initializing NVMe Controllers 00:09:31.339 Attaching to 0000:00:10.0 00:09:31.339 Attaching to 0000:00:11.0 00:09:31.339 Attached to 0000:00:11.0 00:09:31.339 Attached to 0000:00:10.0 00:09:31.339 Initialization complete. Starting I/O... 
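The controller discovery traced a little earlier (iter_pci_class_code 01 08 02) keeps PCI functions with class 01 (mass storage), subclass 08 (NVM) and prog-if 02 (NVMe). The pipeline below mirrors the traced commands; the wrapper name is illustrative and note that the awk variable deliberately carries the surrounding double quotes so it matches lspci's quoted class field.

# List BDFs of NVMe controllers (class code 0108, prog-if 02).
nvme_bdfs() {
  lspci -mm -n -D \
    | grep -i -- -p02 \
    | awk -v cc='"0108"' -F ' ' '{if (cc ~ $2) print $1}' \
    | tr -d '"'
}
nvme_bdfs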
00:09:31.339 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:31.339 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:31.339 00:09:32.712 QEMU NVMe Ctrl (12341 ): 3219 I/Os completed (+3219) 00:09:32.712 QEMU NVMe Ctrl (12340 ): 3270 I/Os completed (+3270) 00:09:32.712 00:09:33.645 QEMU NVMe Ctrl (12341 ): 7431 I/Os completed (+4212) 00:09:33.645 QEMU NVMe Ctrl (12340 ): 7721 I/Os completed (+4451) 00:09:33.645 00:09:34.637 QEMU NVMe Ctrl (12341 ): 11350 I/Os completed (+3919) 00:09:34.637 QEMU NVMe Ctrl (12340 ): 11680 I/Os completed (+3959) 00:09:34.637 00:09:35.572 QEMU NVMe Ctrl (12341 ): 15435 I/Os completed (+4085) 00:09:35.572 QEMU NVMe Ctrl (12340 ): 15741 I/Os completed (+4061) 00:09:35.572 00:09:36.506 QEMU NVMe Ctrl (12341 ): 19694 I/Os completed (+4259) 00:09:36.506 QEMU NVMe Ctrl (12340 ): 19982 I/Os completed (+4241) 00:09:36.506 00:09:37.439 10:30:11 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:37.439 10:30:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:37.439 10:30:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:37.439 [2024-09-28 10:30:11.908514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:09:37.439 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:37.439 [2024-09-28 10:30:11.909973] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:37.439 [2024-09-28 10:30:11.910073] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:37.439 [2024-09-28 10:30:11.910102] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:37.439 [2024-09-28 10:30:11.910151] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:37.439 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:37.439 [2024-09-28 10:30:11.911149] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:37.439 [2024-09-28 10:30:11.911240] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:37.439 [2024-09-28 10:30:11.911267] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:37.439 [2024-09-28 10:30:11.911314] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:37.439 10:30:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:37.439 10:30:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:37.439 [2024-09-28 10:30:11.931071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:09:37.439 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:37.439 [2024-09-28 10:30:11.931846] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:37.439 [2024-09-28 10:30:11.931892] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:37.439 [2024-09-28 10:30:11.931915] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:37.439 [2024-09-28 10:30:11.931939] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:37.439 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:37.439 [2024-09-28 10:30:11.932851] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:37.439 [2024-09-28 10:30:11.932897] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:37.439 [2024-09-28 10:30:11.932923] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:37.439 [2024-09-28 10:30:11.932991] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:37.439 10:30:11 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:37.439 10:30:11 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:37.440 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:37.440 EAL: Scan for (pci) bus failed. 00:09:37.440 10:30:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:37.440 10:30:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:37.440 10:30:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:37.440 10:30:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:37.440 00:09:37.440 10:30:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:37.440 10:30:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:37.440 10:30:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:37.440 Attaching to 0000:00:10.0 00:09:37.440 10:30:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:37.440 Attached to 0000:00:10.0 00:09:37.440 10:30:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:37.440 10:30:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:37.440 10:30:12 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:37.440 Attaching to 0000:00:11.0 00:09:37.440 Attached to 0000:00:11.0 00:09:38.374 QEMU NVMe Ctrl (12340 ): 4258 I/Os completed (+4258) 00:09:38.374 QEMU NVMe Ctrl (12341 ): 3876 I/Os completed (+3876) 00:09:38.374 00:09:39.308 QEMU NVMe Ctrl (12340 ): 8528 I/Os completed (+4270) 00:09:39.308 QEMU NVMe Ctrl (12341 ): 8139 I/Os completed (+4263) 00:09:39.308 00:09:40.681 QEMU NVMe Ctrl (12340 ): 12805 I/Os completed (+4277) 00:09:40.681 QEMU NVMe Ctrl (12341 ): 12396 I/Os completed (+4257) 00:09:40.681 00:09:41.620 QEMU NVMe Ctrl (12340 ): 17046 I/Os completed (+4241) 00:09:41.620 QEMU NVMe Ctrl (12341 ): 16611 I/Os completed (+4215) 00:09:41.620 00:09:42.554 QEMU NVMe Ctrl (12340 ): 21315 I/Os completed (+4269) 00:09:42.554 QEMU NVMe Ctrl (12341 ): 20849 I/Os completed (+4238) 00:09:42.554 00:09:43.487 QEMU NVMe Ctrl (12340 ): 25533 I/Os completed (+4218) 00:09:43.487 QEMU NVMe Ctrl (12341 ): 25046 I/Os completed (+4197) 00:09:43.487 00:09:44.426 QEMU NVMe Ctrl (12340 ): 29507 I/Os completed (+3974) 00:09:44.426 QEMU NVMe Ctrl (12341 ): 29077 I/Os completed (+4031) 
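The remove/attach cycle above is driven through the PCI sysfs interface: writing 1 removes the device (the hotplug app then reports the controller as failed and aborts outstanding commands), and a rescan brings it back before the subsequent echoes rebind it to uio_pci_generic. The exact sysfs files the helper writes to are not visible in the trace, so the following is only a sketch using the standard kernel paths:

bdf=0000:00:10.0                             # example controller
echo 1 > "/sys/bus/pci/devices/$bdf/remove"  # hot-remove: app sees the ctrlr fail
sleep 6                                      # hotplug_wait used by the script
echo 1 > /sys/bus/pci/rescan                 # re-discover the device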
00:09:44.426 00:09:45.373 QEMU NVMe Ctrl (12340 ): 33107 I/Os completed (+3600) 00:09:45.373 QEMU NVMe Ctrl (12341 ): 32689 I/Os completed (+3612) 00:09:45.373 00:09:46.315 QEMU NVMe Ctrl (12340 ): 36636 I/Os completed (+3529) 00:09:46.315 QEMU NVMe Ctrl (12341 ): 36258 I/Os completed (+3569) 00:09:46.315 00:09:47.734 QEMU NVMe Ctrl (12340 ): 40368 I/Os completed (+3732) 00:09:47.734 QEMU NVMe Ctrl (12341 ): 40003 I/Os completed (+3745) 00:09:47.734 00:09:48.679 QEMU NVMe Ctrl (12340 ): 43940 I/Os completed (+3572) 00:09:48.679 QEMU NVMe Ctrl (12341 ): 43575 I/Os completed (+3572) 00:09:48.679 00:09:49.624 QEMU NVMe Ctrl (12340 ): 47524 I/Os completed (+3584) 00:09:49.624 QEMU NVMe Ctrl (12341 ): 47164 I/Os completed (+3589) 00:09:49.624 00:09:49.624 10:30:24 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:09:49.624 10:30:24 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:49.624 10:30:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:49.624 10:30:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:49.624 [2024-09-28 10:30:24.177584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:09:49.624 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:49.624 [2024-09-28 10:30:24.178990] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:49.624 [2024-09-28 10:30:24.179108] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:49.624 [2024-09-28 10:30:24.179143] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:49.624 [2024-09-28 10:30:24.179464] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:49.624 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:49.624 [2024-09-28 10:30:24.180940] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:49.624 [2024-09-28 10:30:24.180997] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:49.624 [2024-09-28 10:30:24.181016] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:49.624 [2024-09-28 10:30:24.181029] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:49.624 10:30:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:49.624 10:30:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:49.624 [2024-09-28 10:30:24.199861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:09:49.624 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:49.624 [2024-09-28 10:30:24.200927] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:49.624 [2024-09-28 10:30:24.201062] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:49.624 [2024-09-28 10:30:24.201124] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:49.624 [2024-09-28 10:30:24.201155] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:49.624 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:49.624 [2024-09-28 10:30:24.202265] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:49.624 [2024-09-28 10:30:24.202412] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:49.624 [2024-09-28 10:30:24.202432] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:49.624 [2024-09-28 10:30:24.202447] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:49.624 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:49.624 EAL: Scan for (pci) bus failed. 00:09:49.624 10:30:24 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:49.624 10:30:24 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:49.624 10:30:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:49.624 10:30:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:49.624 10:30:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:49.624 10:30:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:49.883 10:30:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:49.883 10:30:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:49.883 10:30:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:49.883 10:30:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:49.883 Attaching to 0000:00:10.0 00:09:49.883 Attached to 0000:00:10.0 00:09:49.883 10:30:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:49.883 10:30:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:49.883 10:30:24 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:49.883 Attaching to 0000:00:11.0 00:09:49.883 Attached to 0000:00:11.0 00:09:50.450 QEMU NVMe Ctrl (12340 ): 2883 I/Os completed (+2883) 00:09:50.450 QEMU NVMe Ctrl (12341 ): 2532 I/Os completed (+2532) 00:09:50.450 00:09:51.391 QEMU NVMe Ctrl (12340 ): 6677 I/Os completed (+3794) 00:09:51.391 QEMU NVMe Ctrl (12341 ): 6330 I/Os completed (+3798) 00:09:51.391 00:09:52.333 QEMU NVMe Ctrl (12340 ): 10373 I/Os completed (+3696) 00:09:52.333 QEMU NVMe Ctrl (12341 ): 10044 I/Os completed (+3714) 00:09:52.333 00:09:53.718 QEMU NVMe Ctrl (12340 ): 14024 I/Os completed (+3651) 00:09:53.718 QEMU NVMe Ctrl (12341 ): 13713 I/Os completed (+3669) 00:09:53.718 00:09:54.660 QEMU NVMe Ctrl (12340 ): 17748 I/Os completed (+3724) 00:09:54.660 QEMU NVMe Ctrl (12341 ): 17451 I/Os completed (+3738) 00:09:54.660 00:09:55.599 QEMU NVMe Ctrl (12340 ): 21530 I/Os completed (+3782) 00:09:55.599 QEMU NVMe Ctrl (12341 ): 21290 I/Os completed (+3839) 00:09:55.599 00:09:56.543 QEMU NVMe Ctrl (12340 ): 25118 I/Os completed (+3588) 00:09:56.543 QEMU NVMe Ctrl (12341 ): 24882 I/Os completed (+3592) 00:09:56.543 
00:09:57.482 QEMU NVMe Ctrl (12340 ): 28762 I/Os completed (+3644) 00:09:57.482 QEMU NVMe Ctrl (12341 ): 28533 I/Os completed (+3651) 00:09:57.482 00:09:58.421 QEMU NVMe Ctrl (12340 ): 32518 I/Os completed (+3756) 00:09:58.421 QEMU NVMe Ctrl (12341 ): 32323 I/Os completed (+3790) 00:09:58.421 00:09:59.362 QEMU NVMe Ctrl (12340 ): 36386 I/Os completed (+3868) 00:09:59.362 QEMU NVMe Ctrl (12341 ): 36197 I/Os completed (+3874) 00:09:59.362 00:10:00.335 QEMU NVMe Ctrl (12340 ): 40700 I/Os completed (+4314) 00:10:00.335 QEMU NVMe Ctrl (12341 ): 40515 I/Os completed (+4318) 00:10:00.336 00:10:01.715 QEMU NVMe Ctrl (12340 ): 45093 I/Os completed (+4393) 00:10:01.715 QEMU NVMe Ctrl (12341 ): 44826 I/Os completed (+4311) 00:10:01.715 00:10:01.715 10:30:36 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:01.715 10:30:36 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:01.715 10:30:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:01.715 10:30:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:01.715 [2024-09-28 10:30:36.491125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:01.715 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:01.976 [2024-09-28 10:30:36.492201] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:01.976 [2024-09-28 10:30:36.492256] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:01.976 [2024-09-28 10:30:36.492288] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:01.976 [2024-09-28 10:30:36.492326] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:01.976 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:01.976 [2024-09-28 10:30:36.493559] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:01.976 [2024-09-28 10:30:36.493711] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:01.976 [2024-09-28 10:30:36.493782] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:01.976 [2024-09-28 10:30:36.493813] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:01.976 10:30:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:01.976 10:30:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:01.976 [2024-09-28 10:30:36.510523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:01.976 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:01.976 [2024-09-28 10:30:36.511511] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:01.976 [2024-09-28 10:30:36.511665] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:01.976 [2024-09-28 10:30:36.511703] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:01.976 [2024-09-28 10:30:36.511719] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:01.976 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:01.976 [2024-09-28 10:30:36.512842] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:01.976 [2024-09-28 10:30:36.512943] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:01.976 [2024-09-28 10:30:36.513024] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:01.976 [2024-09-28 10:30:36.513055] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:01.976 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:01.976 10:30:36 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:01.976 EAL: Scan for (pci) bus failed. 00:10:01.976 10:30:36 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:01.976 10:30:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:01.976 10:30:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:01.976 10:30:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:01.976 10:30:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:01.976 10:30:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:01.976 10:30:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:01.976 10:30:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:01.976 10:30:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:01.976 Attaching to 0000:00:10.0 00:10:01.976 Attached to 0000:00:10.0 00:10:01.976 10:30:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:01.976 10:30:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:01.976 10:30:36 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:01.976 Attaching to 0000:00:11.0 00:10:01.976 Attached to 0000:00:11.0 00:10:02.236 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:02.236 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:02.236 [2024-09-28 10:30:36.754269] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:14.478 10:30:48 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:14.479 10:30:48 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:14.479 10:30:48 sw_hotplug -- common/autotest_common.sh@717 -- # time=42.85 00:10:14.479 10:30:48 sw_hotplug -- common/autotest_common.sh@718 -- # echo 42.85 00:10:14.479 10:30:48 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:10:14.479 10:30:48 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.85 00:10:14.479 10:30:48 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.85 2 00:10:14.479 remove_attach_helper took 42.85s to complete (handling 2 nvme drive(s)) 10:30:48 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:21.067 10:30:54 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 79800 00:10:21.067 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (79800) - No such process 00:10:21.067 10:30:54 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 79800 00:10:21.067 10:30:54 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:21.067 10:30:54 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:21.067 10:30:54 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:21.067 10:30:54 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=80348 00:10:21.067 10:30:54 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:21.067 10:30:54 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 80348 00:10:21.067 10:30:54 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:21.067 10:30:54 sw_hotplug -- common/autotest_common.sh@831 -- # '[' -z 80348 ']' 00:10:21.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.067 10:30:54 sw_hotplug -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.067 10:30:54 sw_hotplug -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:21.067 10:30:54 sw_hotplug -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.067 10:30:54 sw_hotplug -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:21.067 10:30:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:21.067 [2024-09-28 10:30:54.837445] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:10:21.067 [2024-09-28 10:30:54.837596] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80348 ] 00:10:21.068 [2024-09-28 10:30:54.967628] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
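For the bdev-based half of the test, an SPDK target is started and the script waits for its RPC socket (the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above). A simplified stand-in for the waitforlisten helper, polling an RPC method until the target answers; paths are taken from the log, the polling loop itself is an assumption:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
spdk_tgt_pid=$!
trap 'kill $spdk_tgt_pid; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT

# Wait until the target's RPC socket responds.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
  sleep 0.5
done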
00:10:21.068 [2024-09-28 10:30:54.986927] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.068 [2024-09-28 10:30:55.019134] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.068 10:30:55 sw_hotplug -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:21.068 10:30:55 sw_hotplug -- common/autotest_common.sh@864 -- # return 0 00:10:21.068 10:30:55 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:21.068 10:30:55 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.068 10:30:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:21.068 10:30:55 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.068 10:30:55 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:21.068 10:30:55 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:21.068 10:30:55 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:21.068 10:30:55 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:10:21.068 10:30:55 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:10:21.068 10:30:55 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:10:21.068 10:30:55 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:10:21.068 10:30:55 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:10:21.068 10:30:55 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:21.068 10:30:55 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:21.068 10:30:55 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:21.068 10:30:55 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:21.068 10:30:55 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:27.666 10:31:01 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:27.666 10:31:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:27.666 10:31:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:27.666 10:31:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:27.666 10:31:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:27.666 10:31:01 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:27.666 10:31:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:27.666 10:31:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:27.666 10:31:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:27.666 10:31:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:27.666 10:31:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:27.666 10:31:01 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.666 10:31:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:27.666 10:31:01 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.666 10:31:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:27.666 10:31:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:27.666 [2024-09-28 10:31:01.766816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:10:27.666 [2024-09-28 10:31:01.767919] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.666 [2024-09-28 10:31:01.767956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:27.666 [2024-09-28 10:31:01.767976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:27.666 [2024-09-28 10:31:01.767992] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.666 [2024-09-28 10:31:01.768000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:27.666 [2024-09-28 10:31:01.768011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:27.666 [2024-09-28 10:31:01.768020] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.666 [2024-09-28 10:31:01.768028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:27.666 [2024-09-28 10:31:01.768035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:27.666 [2024-09-28 10:31:01.768043] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.666 [2024-09-28 10:31:01.768049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:27.666 [2024-09-28 10:31:01.768057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:27.666 10:31:02 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:27.666 10:31:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:27.666 10:31:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:27.666 10:31:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:27.666 10:31:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:27.666 10:31:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:27.666 10:31:02 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.666 10:31:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:27.666 [2024-09-28 10:31:02.266818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
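With the target up, hotplug monitoring is switched on over RPC and the set of controllers still backing bdevs is read back, exactly as the bdev_bdfs helper traced above does (bdev_get_bdevs piped through jq and sort -u). Equivalent standalone commands; the rpc.py path is the one from the log, and the corresponding bdev_nvme_set_hotplug -d call later in the test disables the monitor again:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_nvme_set_hotplug -e          # enable hotplug monitoring in the target
$rpc bdev_get_bdevs \
  | jq -r '.[].driver_specific.nvme[].pci_address' \
  | sort -u                            # PCI addresses still attached as bdevs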
00:10:27.666 [2024-09-28 10:31:02.267847] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.666 [2024-09-28 10:31:02.267877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:27.666 [2024-09-28 10:31:02.267888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:27.666 [2024-09-28 10:31:02.267901] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.666 [2024-09-28 10:31:02.267909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:27.666 [2024-09-28 10:31:02.267916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:27.666 [2024-09-28 10:31:02.267924] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.666 [2024-09-28 10:31:02.267931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:27.666 [2024-09-28 10:31:02.267940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:27.666 [2024-09-28 10:31:02.267946] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.666 [2024-09-28 10:31:02.267954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:27.666 [2024-09-28 10:31:02.267968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:27.666 10:31:02 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.666 10:31:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:27.666 10:31:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:28.233 10:31:02 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:28.233 10:31:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:28.233 10:31:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:28.233 10:31:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:28.233 10:31:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:28.233 10:31:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:28.233 10:31:02 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.233 10:31:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:28.233 10:31:02 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.233 10:31:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:28.233 10:31:02 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:28.233 10:31:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:28.233 10:31:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:28.233 10:31:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:28.233 10:31:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:28.233 10:31:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:28.233 10:31:02 
sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:28.233 10:31:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:28.233 10:31:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:28.492 10:31:03 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:28.492 10:31:03 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:28.492 10:31:03 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:40.706 10:31:15 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:40.706 10:31:15 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:40.706 10:31:15 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:40.706 10:31:15 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:40.706 10:31:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:40.706 10:31:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:40.706 10:31:15 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.706 10:31:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:40.706 10:31:15 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.706 10:31:15 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:40.706 10:31:15 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:40.706 10:31:15 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:40.706 10:31:15 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:40.706 10:31:15 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:40.707 10:31:15 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:40.707 10:31:15 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:40.707 10:31:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:40.707 10:31:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:40.707 10:31:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:40.707 10:31:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:40.707 10:31:15 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.707 10:31:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:40.707 10:31:15 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:40.707 10:31:15 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.707 10:31:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:40.707 10:31:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:40.707 [2024-09-28 10:31:15.167021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:10:40.707 [2024-09-28 10:31:15.168054] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.707 [2024-09-28 10:31:15.168164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:40.707 [2024-09-28 10:31:15.168180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:40.707 [2024-09-28 10:31:15.168194] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.707 [2024-09-28 10:31:15.168201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:40.707 [2024-09-28 10:31:15.168216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:40.707 [2024-09-28 10:31:15.168224] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.707 [2024-09-28 10:31:15.168232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:40.708 [2024-09-28 10:31:15.168255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:40.708 [2024-09-28 10:31:15.168263] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.708 [2024-09-28 10:31:15.168270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:40.708 [2024-09-28 10:31:15.168279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:40.971 [2024-09-28 10:31:15.567018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:40.971 [2024-09-28 10:31:15.568153] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.971 [2024-09-28 10:31:15.568181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:40.971 [2024-09-28 10:31:15.568191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:40.971 [2024-09-28 10:31:15.568201] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.971 [2024-09-28 10:31:15.568216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:40.971 [2024-09-28 10:31:15.568223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:40.971 [2024-09-28 10:31:15.568231] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.971 [2024-09-28 10:31:15.568237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:40.971 [2024-09-28 10:31:15.568245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:40.971 [2024-09-28 10:31:15.568250] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.971 [2024-09-28 10:31:15.568258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:40.971 [2024-09-28 10:31:15.568264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:40.971 10:31:15 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:40.971 10:31:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:40.971 10:31:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:40.971 10:31:15 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:40.971 10:31:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:40.971 10:31:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:40.971 10:31:15 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.971 10:31:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:40.971 10:31:15 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.971 10:31:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:40.971 10:31:15 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:41.230 10:31:15 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:41.230 10:31:15 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:41.230 10:31:15 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:41.230 10:31:15 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:41.230 10:31:15 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:41.230 10:31:15 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:41.230 10:31:15 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:41.230 10:31:15 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:10:41.230 10:31:15 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:41.230 10:31:15 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:41.230 10:31:15 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:53.432 10:31:27 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:53.432 10:31:27 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:53.432 10:31:27 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:53.432 10:31:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:53.432 10:31:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:53.432 10:31:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:53.432 10:31:27 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.432 10:31:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:53.432 10:31:27 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.432 10:31:27 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:53.432 10:31:27 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:53.432 10:31:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:53.432 10:31:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:53.432 10:31:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:53.432 10:31:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:53.432 10:31:28 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:53.432 10:31:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:53.432 10:31:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:53.432 10:31:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:53.432 10:31:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:53.432 10:31:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:53.432 10:31:28 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.432 10:31:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:53.432 10:31:28 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.432 10:31:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:53.432 10:31:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:53.432 [2024-09-28 10:31:28.067241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:10:53.432 [2024-09-28 10:31:28.068407] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.432 [2024-09-28 10:31:28.068509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:53.432 [2024-09-28 10:31:28.068568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.432 [2024-09-28 10:31:28.068688] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.432 [2024-09-28 10:31:28.068707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:53.432 [2024-09-28 10:31:28.068732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.432 [2024-09-28 10:31:28.068754] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.432 [2024-09-28 10:31:28.068772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:53.432 [2024-09-28 10:31:28.068834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.432 [2024-09-28 10:31:28.068998] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.432 [2024-09-28 10:31:28.069019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:53.432 [2024-09-28 10:31:28.069070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.999 10:31:28 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:53.999 10:31:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:53.999 10:31:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:53.999 10:31:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:53.999 10:31:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:53.999 10:31:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:53.999 10:31:28 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.999 10:31:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:53.999 [2024-09-28 10:31:28.567246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:53.999 [2024-09-28 10:31:28.568372] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.999 [2024-09-28 10:31:28.568472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:53.999 [2024-09-28 10:31:28.568535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.999 [2024-09-28 10:31:28.568586] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.999 [2024-09-28 10:31:28.568608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:53.999 [2024-09-28 10:31:28.568660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.999 [2024-09-28 10:31:28.568735] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.999 [2024-09-28 10:31:28.568751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:53.999 [2024-09-28 10:31:28.568802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.999 [2024-09-28 10:31:28.568853] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.999 [2024-09-28 10:31:28.568870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:53.999 [2024-09-28 10:31:28.568893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.999 10:31:28 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.999 10:31:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:53.999 10:31:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:54.566 10:31:29 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:54.566 10:31:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:54.566 10:31:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:54.566 10:31:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:54.566 10:31:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:54.566 10:31:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:54.566 10:31:29 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.566 10:31:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:54.566 10:31:29 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.566 10:31:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:54.566 10:31:29 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:54.566 10:31:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:54.566 10:31:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:54.566 10:31:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:54.566 10:31:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:54.566 10:31:29 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:54.566 10:31:29 
sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:54.566 10:31:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:54.566 10:31:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:54.824 10:31:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:54.824 10:31:29 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:54.824 10:31:29 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:07.086 10:31:41 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:07.086 10:31:41 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:07.086 10:31:41 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:07.086 10:31:41 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:07.086 10:31:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:07.086 10:31:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:07.086 10:31:41 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.086 10:31:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:07.086 10:31:41 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.086 10:31:41 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:07.086 10:31:41 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:07.086 10:31:41 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.72 00:11:07.086 10:31:41 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.72 00:11:07.086 10:31:41 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:11:07.086 10:31:41 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.72 00:11:07.086 10:31:41 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.72 2 00:11:07.086 remove_attach_helper took 45.72s to complete (handling 2 nvme drive(s)) 10:31:41 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:07.086 10:31:41 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.086 10:31:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:07.086 10:31:41 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.086 10:31:41 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:07.086 10:31:41 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.086 10:31:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:07.086 10:31:41 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.086 10:31:41 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:07.086 10:31:41 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:07.086 10:31:41 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:07.086 10:31:41 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:11:07.086 10:31:41 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:11:07.086 10:31:41 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:11:07.086 10:31:41 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:11:07.086 10:31:41 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:11:07.086 10:31:41 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:07.086 10:31:41 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # 
local hotplug_wait=6 00:11:07.086 10:31:41 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:07.086 10:31:41 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:07.086 10:31:41 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:13.641 10:31:47 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:13.641 10:31:47 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:13.641 10:31:47 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:13.641 10:31:47 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:13.641 10:31:47 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:13.641 10:31:47 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:13.641 10:31:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:13.641 10:31:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:13.641 10:31:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:13.641 10:31:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:13.641 10:31:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:13.641 10:31:47 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.641 10:31:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:13.641 10:31:47 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.641 10:31:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:13.641 10:31:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:13.641 [2024-09-28 10:31:47.518482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:13.641 [2024-09-28 10:31:47.519263] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:13.641 [2024-09-28 10:31:47.519287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:13.641 [2024-09-28 10:31:47.519296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:13.641 [2024-09-28 10:31:47.519309] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:13.641 [2024-09-28 10:31:47.519316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:13.641 [2024-09-28 10:31:47.519324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:13.641 [2024-09-28 10:31:47.519330] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:13.641 [2024-09-28 10:31:47.519341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:13.641 [2024-09-28 10:31:47.519347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:13.641 [2024-09-28 10:31:47.519355] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:13.641 [2024-09-28 10:31:47.519362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:13.641 [2024-09-28 10:31:47.519369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY 
REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:13.641 10:31:48 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:13.641 10:31:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:13.641 10:31:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:13.641 10:31:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:13.641 10:31:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:13.641 10:31:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:13.641 10:31:48 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.641 10:31:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:13.641 10:31:48 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.641 10:31:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:13.641 10:31:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:13.641 [2024-09-28 10:31:48.218484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:11:13.641 [2024-09-28 10:31:48.219462] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:13.641 [2024-09-28 10:31:48.219493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:13.641 [2024-09-28 10:31:48.219504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:13.641 [2024-09-28 10:31:48.219515] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:13.641 [2024-09-28 10:31:48.219523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:13.641 [2024-09-28 10:31:48.219529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:13.641 [2024-09-28 10:31:48.219537] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:13.641 [2024-09-28 10:31:48.219543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:13.641 [2024-09-28 10:31:48.219551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:13.641 [2024-09-28 10:31:48.219557] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:13.641 [2024-09-28 10:31:48.219566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:13.641 [2024-09-28 10:31:48.219572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:13.900 10:31:48 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:13.900 10:31:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:13.900 10:31:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:13.900 10:31:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:13.900 10:31:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 
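The rpc_cmd bdev_get_bdevs / jq / sort -u entries that recur in this wait loop (sw_hotplug.sh@12-13, with /dev/fd/63 being bash process substitution) are the expansion of the test's bdev_bdfs helper, which asks the running SPDK target which NVMe PCI addresses still back a bdev; the "Still waiting for ... to be gone" printf fires until that list is empty. A minimal sketch of what the traced commands amount to, reconstructed from the trace rather than copied from the repository:

# Reconstructed from the xtrace; rpc_cmd is the harness wrapper around
# scripts/rpc.py, and the jq filter keeps only the PCI address recorded in each
# NVMe bdev's driver_specific section.
bdev_bdfs() {
    jq -r '.[].driver_specific.nvme[].pci_address' <(rpc_cmd bdev_get_bdevs) | sort -u
}
# Used roughly as in the trace: bdfs=($(bdev_bdfs)); (( ${#bdfs[@]} > 0 )) && sleep 0.5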
00:11:13.900 10:31:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:13.900 10:31:48 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.900 10:31:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:13.900 10:31:48 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.900 10:31:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:13.900 10:31:48 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:13.900 10:31:48 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:13.900 10:31:48 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:13.900 10:31:48 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:14.157 10:31:48 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:14.157 10:31:48 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:14.157 10:31:48 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:14.157 10:31:48 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:14.157 10:31:48 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:14.157 10:31:48 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:14.157 10:31:48 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:14.157 10:31:48 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:26.350 10:32:00 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:26.350 10:32:00 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:26.350 10:32:00 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:26.350 10:32:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:26.350 10:32:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:26.350 10:32:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:26.350 10:32:00 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.350 10:32:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:26.350 10:32:00 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.350 10:32:00 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:26.350 10:32:00 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:26.350 10:32:00 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:26.350 10:32:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:26.350 10:32:00 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:26.350 10:32:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:26.350 10:32:00 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:26.350 10:32:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:26.350 10:32:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:26.350 10:32:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:26.350 10:32:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:26.350 10:32:00 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.350 10:32:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:26.350 10:32:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:26.350 10:32:00 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.350 [2024-09-28 10:32:00.918707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:11:26.350 [2024-09-28 10:32:00.919663] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.350 [2024-09-28 10:32:00.919693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.350 [2024-09-28 10:32:00.919704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.350 [2024-09-28 10:32:00.919717] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.350 [2024-09-28 10:32:00.919725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.350 [2024-09-28 10:32:00.919732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.350 [2024-09-28 10:32:00.919739] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.350 [2024-09-28 10:32:00.919747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.350 [2024-09-28 10:32:00.919754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.350 [2024-09-28 10:32:00.919761] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.350 [2024-09-28 10:32:00.919767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.351 [2024-09-28 10:32:00.919775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.351 10:32:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:26.351 10:32:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:26.916 [2024-09-28 10:32:01.418707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:26.916 [2024-09-28 10:32:01.419452] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.916 [2024-09-28 10:32:01.419482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.916 [2024-09-28 10:32:01.419493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.916 [2024-09-28 10:32:01.419504] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.916 [2024-09-28 10:32:01.419512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.916 [2024-09-28 10:32:01.419519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.916 [2024-09-28 10:32:01.419527] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.916 [2024-09-28 10:32:01.419534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.916 [2024-09-28 10:32:01.419541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.916 [2024-09-28 10:32:01.419548] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.916 [2024-09-28 10:32:01.419556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.916 [2024-09-28 10:32:01.419562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.916 10:32:01 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:26.916 10:32:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:26.916 10:32:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:26.916 10:32:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:26.916 10:32:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:26.916 10:32:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:26.916 10:32:01 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.916 10:32:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:26.916 10:32:01 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.916 10:32:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:26.916 10:32:01 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:26.916 10:32:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:26.916 10:32:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:26.916 10:32:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:26.916 10:32:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:26.916 10:32:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:26.916 10:32:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:26.916 10:32:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:26.916 10:32:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:26.916 10:32:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:26.916 10:32:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:26.916 10:32:01 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:39.110 10:32:13 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:39.110 10:32:13 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:39.110 10:32:13 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:39.110 10:32:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:39.110 10:32:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:39.110 10:32:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:39.110 10:32:13 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.110 10:32:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:39.110 10:32:13 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.110 10:32:13 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:39.110 10:32:13 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:39.110 10:32:13 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:39.110 10:32:13 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:39.110 10:32:13 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:39.110 10:32:13 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:39.110 10:32:13 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:39.110 10:32:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:39.110 10:32:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:39.110 10:32:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:39.110 10:32:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:39.110 10:32:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:39.110 10:32:13 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.110 10:32:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:39.110 10:32:13 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.110 10:32:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:39.110 10:32:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:39.110 [2024-09-28 10:32:13.818918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:11:39.110 [2024-09-28 10:32:13.819864] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:39.110 [2024-09-28 10:32:13.819894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.110 [2024-09-28 10:32:13.819905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.110 [2024-09-28 10:32:13.819920] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:39.110 [2024-09-28 10:32:13.819928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.110 [2024-09-28 10:32:13.819936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.111 [2024-09-28 10:32:13.819942] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:39.111 [2024-09-28 10:32:13.819951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.111 [2024-09-28 10:32:13.819957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.111 [2024-09-28 10:32:13.819975] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:39.111 [2024-09-28 10:32:13.819981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.111 [2024-09-28 10:32:13.819989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.675 10:32:14 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:39.675 10:32:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:39.675 10:32:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:39.675 10:32:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:39.675 10:32:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:39.675 10:32:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:39.675 10:32:14 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.675 10:32:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:39.675 10:32:14 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.675 10:32:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:39.675 10:32:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:39.675 [2024-09-28 10:32:14.418921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:39.675 [2024-09-28 10:32:14.421074] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:39.675 [2024-09-28 10:32:14.421100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.675 [2024-09-28 10:32:14.421111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.675 [2024-09-28 10:32:14.421123] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:39.675 [2024-09-28 10:32:14.421131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.675 [2024-09-28 10:32:14.421138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.675 [2024-09-28 10:32:14.421148] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:39.675 [2024-09-28 10:32:14.421155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.675 [2024-09-28 10:32:14.421163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.675 [2024-09-28 10:32:14.421170] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:39.675 [2024-09-28 10:32:14.421177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.675 [2024-09-28 10:32:14.421184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:40.240 10:32:14 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:40.240 10:32:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:40.240 10:32:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:40.240 10:32:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:40.240 10:32:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:40.240 10:32:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:40.240 10:32:14 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.240 10:32:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:40.240 10:32:14 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.240 10:32:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:40.240 10:32:14 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:40.240 10:32:14 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:40.240 10:32:14 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:40.240 10:32:14 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:40.240 10:32:14 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:40.240 10:32:15 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:40.240 10:32:15 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:40.240 10:32:15 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:40.240 10:32:15 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:11:40.544 10:32:15 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:40.544 10:32:15 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:40.544 10:32:15 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:52.767 10:32:27 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:52.767 10:32:27 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:52.767 10:32:27 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:52.767 10:32:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:52.767 10:32:27 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.767 10:32:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:52.767 10:32:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:52.767 10:32:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:52.767 10:32:27 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.767 10:32:27 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:52.768 10:32:27 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:52.768 10:32:27 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.69 00:11:52.768 10:32:27 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.69 00:11:52.768 10:32:27 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:11:52.768 10:32:27 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.69 00:11:52.768 10:32:27 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.69 2 00:11:52.768 remove_attach_helper took 45.69s to complete (handling 2 nvme drive(s)) 10:32:27 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:11:52.768 10:32:27 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 80348 00:11:52.768 10:32:27 sw_hotplug -- common/autotest_common.sh@950 -- # '[' -z 80348 ']' 00:11:52.768 10:32:27 sw_hotplug -- common/autotest_common.sh@954 -- # kill -0 80348 00:11:52.768 10:32:27 sw_hotplug -- common/autotest_common.sh@955 -- # uname 00:11:52.768 10:32:27 sw_hotplug -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:52.768 10:32:27 sw_hotplug -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80348 00:11:52.768 10:32:27 sw_hotplug -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:52.768 10:32:27 sw_hotplug -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:52.768 killing process with pid 80348 00:11:52.768 10:32:27 sw_hotplug -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80348' 00:11:52.768 10:32:27 sw_hotplug -- common/autotest_common.sh@969 -- # kill 80348 00:11:52.768 10:32:27 sw_hotplug -- common/autotest_common.sh@974 -- # wait 80348 00:11:52.768 10:32:27 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:53.030 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:53.601 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:53.601 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:53.602 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:53.602 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:53.602 00:11:53.602 real 2m29.746s 00:11:53.602 user 1m49.992s 00:11:53.602 sys 0m18.456s 00:11:53.602 ************************************ 
00:11:53.602 END TEST sw_hotplug 00:11:53.602 ************************************ 00:11:53.602 10:32:28 sw_hotplug -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:53.602 10:32:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:53.863 10:32:28 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:11:53.863 10:32:28 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:53.863 10:32:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:53.863 10:32:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:53.863 10:32:28 -- common/autotest_common.sh@10 -- # set +x 00:11:53.863 ************************************ 00:11:53.863 START TEST nvme_xnvme 00:11:53.863 ************************************ 00:11:53.863 10:32:28 nvme_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:53.863 * Looking for test storage... 00:11:53.863 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:53.863 10:32:28 nvme_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:53.863 10:32:28 nvme_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:11:53.863 10:32:28 nvme_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:53.863 10:32:28 nvme_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:53.863 10:32:28 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:53.863 10:32:28 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:53.863 10:32:28 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:53.863 10:32:28 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:11:53.863 10:32:28 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:11:53.863 10:32:28 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:11:53.863 10:32:28 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:11:53.863 10:32:28 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:11:53.863 10:32:28 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:11:53.863 10:32:28 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:11:53.863 10:32:28 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:53.863 10:32:28 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:11:53.863 10:32:28 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:11:53.863 10:32:28 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:53.863 10:32:28 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:53.863 10:32:28 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:11:53.863 10:32:28 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:11:53.863 10:32:28 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:53.863 10:32:28 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:11:53.863 10:32:28 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:11:53.863 10:32:28 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:11:53.863 10:32:28 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:11:53.863 10:32:28 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:53.863 10:32:28 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:11:53.863 10:32:28 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:11:53.863 10:32:28 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:53.863 10:32:28 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:53.863 10:32:28 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:11:53.863 10:32:28 nvme_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:53.863 10:32:28 nvme_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:53.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.863 --rc genhtml_branch_coverage=1 00:11:53.863 --rc genhtml_function_coverage=1 00:11:53.863 --rc genhtml_legend=1 00:11:53.863 --rc geninfo_all_blocks=1 00:11:53.863 --rc geninfo_unexecuted_blocks=1 00:11:53.863 00:11:53.863 ' 00:11:53.863 10:32:28 nvme_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:53.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.863 --rc genhtml_branch_coverage=1 00:11:53.863 --rc genhtml_function_coverage=1 00:11:53.863 --rc genhtml_legend=1 00:11:53.863 --rc geninfo_all_blocks=1 00:11:53.863 --rc geninfo_unexecuted_blocks=1 00:11:53.863 00:11:53.863 ' 00:11:53.863 10:32:28 nvme_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:53.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.863 --rc genhtml_branch_coverage=1 00:11:53.863 --rc genhtml_function_coverage=1 00:11:53.863 --rc genhtml_legend=1 00:11:53.863 --rc geninfo_all_blocks=1 00:11:53.863 --rc geninfo_unexecuted_blocks=1 00:11:53.863 00:11:53.863 ' 00:11:53.863 10:32:28 nvme_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:53.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.863 --rc genhtml_branch_coverage=1 00:11:53.863 --rc genhtml_function_coverage=1 00:11:53.863 --rc genhtml_legend=1 00:11:53.863 --rc geninfo_all_blocks=1 00:11:53.863 --rc geninfo_unexecuted_blocks=1 00:11:53.863 00:11:53.863 ' 00:11:53.863 10:32:28 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:53.863 10:32:28 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:11:53.863 10:32:28 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:53.863 10:32:28 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:53.863 10:32:28 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:53.863 10:32:28 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.863 10:32:28 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.863 10:32:28 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.863 10:32:28 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:11:53.863 10:32:28 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.863 10:32:28 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:11:53.863 10:32:28 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:53.863 10:32:28 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:53.863 10:32:28 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:11:53.863 ************************************ 00:11:53.864 START TEST xnvme_to_malloc_dd_copy 00:11:53.864 ************************************ 00:11:53.864 10:32:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1125 -- # malloc_to_xnvme_copy 00:11:53.864 10:32:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:11:53.864 10:32:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:11:53.864 10:32:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:11:53.864 10:32:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:11:53.864 10:32:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:11:53.864 10:32:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:11:53.864 10:32:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:11:53.864 10:32:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:11:53.864 10:32:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:11:53.864 10:32:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:11:53.864 10:32:28 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:11:53.864 10:32:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:11:53.864 10:32:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:11:53.864 10:32:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:11:53.864 10:32:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:11:53.864 10:32:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:11:53.864 10:32:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:11:53.864 10:32:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:11:53.864 10:32:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:11:53.864 10:32:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:11:53.864 10:32:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:11:53.864 10:32:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:11:53.864 10:32:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:53.864 10:32:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:11:54.123 { 00:11:54.123 "subsystems": [ 00:11:54.123 { 00:11:54.123 "subsystem": "bdev", 00:11:54.123 "config": [ 00:11:54.123 { 00:11:54.123 "params": { 00:11:54.123 "block_size": 512, 00:11:54.123 "num_blocks": 2097152, 00:11:54.123 "name": "malloc0" 00:11:54.123 }, 00:11:54.123 "method": "bdev_malloc_create" 00:11:54.123 }, 00:11:54.123 { 00:11:54.123 "params": { 00:11:54.123 "io_mechanism": "libaio", 00:11:54.123 "filename": "/dev/nullb0", 00:11:54.123 "name": "null0" 00:11:54.123 }, 00:11:54.123 "method": "bdev_xnvme_create" 00:11:54.123 }, 00:11:54.123 { 00:11:54.123 "method": "bdev_wait_for_examine" 00:11:54.123 } 00:11:54.123 ] 00:11:54.123 } 00:11:54.123 ] 00:11:54.123 } 00:11:54.123 [2024-09-28 10:32:28.689769] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:11:54.123 [2024-09-28 10:32:28.689922] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81745 ] 00:11:54.123 [2024-09-28 10:32:28.822221] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
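The spdk_dd job traced here copies a 1 GiB malloc bdev (malloc0, 2097152 blocks of 512 bytes) onto an xnvme bdev (null0, libaio over /dev/nullb0) using the JSON subsystem config printed just above; the harness feeds that config through process substitution (--json /dev/fd/62). A rough standalone equivalent, assuming an SPDK build tree, root privileges, and an illustrative file path:

sudo modprobe null_blk gb=1    # provides /dev/nullb0, as init_null_blk does in the test
cat > /tmp/xnvme_copy.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_malloc_create",
          "params": { "name": "malloc0", "block_size": 512, "num_blocks": 2097152 } },
        { "method": "bdev_xnvme_create",
          "params": { "name": "null0", "io_mechanism": "libaio", "filename": "/dev/nullb0" } },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
sudo ./build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /tmp/xnvme_copy.json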
00:11:54.123 [2024-09-28 10:32:28.841953] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:54.123 [2024-09-28 10:32:28.883368] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.196  Copying: 307/1024 [MB] (307 MBps) Copying: 615/1024 [MB] (308 MBps) Copying: 923/1024 [MB] (308 MBps) Copying: 1024/1024 [MB] (average 307 MBps) 00:11:58.196 00:11:58.196 10:32:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:11:58.196 10:32:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:11:58.196 10:32:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:58.196 10:32:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:11:58.196 { 00:11:58.196 "subsystems": [ 00:11:58.196 { 00:11:58.196 "subsystem": "bdev", 00:11:58.196 "config": [ 00:11:58.196 { 00:11:58.196 "params": { 00:11:58.196 "block_size": 512, 00:11:58.196 "num_blocks": 2097152, 00:11:58.196 "name": "malloc0" 00:11:58.196 }, 00:11:58.196 "method": "bdev_malloc_create" 00:11:58.196 }, 00:11:58.196 { 00:11:58.196 "params": { 00:11:58.196 "io_mechanism": "libaio", 00:11:58.196 "filename": "/dev/nullb0", 00:11:58.196 "name": "null0" 00:11:58.196 }, 00:11:58.196 "method": "bdev_xnvme_create" 00:11:58.196 }, 00:11:58.196 { 00:11:58.196 "method": "bdev_wait_for_examine" 00:11:58.196 } 00:11:58.196 ] 00:11:58.196 } 00:11:58.196 ] 00:11:58.196 } 00:11:58.196 [2024-09-28 10:32:32.833723] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:11:58.196 [2024-09-28 10:32:32.833838] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81799 ] 00:11:58.196 [2024-09-28 10:32:32.961427] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:11:58.456 [2024-09-28 10:32:32.975439] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:58.456 [2024-09-28 10:32:33.014527] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.219  Copying: 308/1024 [MB] (308 MBps) Copying: 617/1024 [MB] (309 MBps) Copying: 927/1024 [MB] (309 MBps) Copying: 1024/1024 [MB] (average 309 MBps) 00:12:02.219 00:12:02.219 10:32:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:02.219 10:32:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:02.219 10:32:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:02.219 10:32:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:02.219 10:32:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:02.219 10:32:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:02.219 { 00:12:02.219 "subsystems": [ 00:12:02.219 { 00:12:02.219 "subsystem": "bdev", 00:12:02.219 "config": [ 00:12:02.219 { 00:12:02.219 "params": { 00:12:02.219 "block_size": 512, 00:12:02.219 "num_blocks": 2097152, 00:12:02.219 "name": "malloc0" 00:12:02.219 }, 00:12:02.219 "method": "bdev_malloc_create" 00:12:02.219 }, 00:12:02.219 { 00:12:02.219 "params": { 00:12:02.219 "io_mechanism": "io_uring", 00:12:02.219 "filename": "/dev/nullb0", 00:12:02.219 "name": "null0" 00:12:02.219 }, 00:12:02.219 "method": "bdev_xnvme_create" 00:12:02.219 }, 00:12:02.219 { 00:12:02.219 "method": "bdev_wait_for_examine" 00:12:02.219 } 00:12:02.219 ] 00:12:02.219 } 00:12:02.219 ] 00:12:02.219 } 00:12:02.219 [2024-09-28 10:32:36.950164] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:12:02.219 [2024-09-28 10:32:36.950278] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81853 ] 00:12:02.478 [2024-09-28 10:32:37.079116] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:12:02.478 [2024-09-28 10:32:37.098207] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:02.478 [2024-09-28 10:32:37.127112] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.242  Copying: 317/1024 [MB] (317 MBps) Copying: 635/1024 [MB] (317 MBps) Copying: 953/1024 [MB] (317 MBps) Copying: 1024/1024 [MB] (average 317 MBps) 00:12:06.242 00:12:06.242 10:32:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:06.242 10:32:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:06.242 10:32:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:06.242 10:32:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:06.242 { 00:12:06.242 "subsystems": [ 00:12:06.242 { 00:12:06.242 "subsystem": "bdev", 00:12:06.242 "config": [ 00:12:06.242 { 00:12:06.242 "params": { 00:12:06.242 "block_size": 512, 00:12:06.242 "num_blocks": 2097152, 00:12:06.242 "name": "malloc0" 00:12:06.242 }, 00:12:06.242 "method": "bdev_malloc_create" 00:12:06.242 }, 00:12:06.242 { 00:12:06.242 "params": { 00:12:06.242 "io_mechanism": "io_uring", 00:12:06.242 "filename": "/dev/nullb0", 00:12:06.242 "name": "null0" 00:12:06.242 }, 00:12:06.242 "method": "bdev_xnvme_create" 00:12:06.242 }, 00:12:06.242 { 00:12:06.242 "method": "bdev_wait_for_examine" 00:12:06.242 } 00:12:06.242 ] 00:12:06.242 } 00:12:06.242 ] 00:12:06.242 } 00:12:06.242 [2024-09-28 10:32:40.940751] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:12:06.242 [2024-09-28 10:32:40.941282] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81907 ] 00:12:06.502 [2024-09-28 10:32:41.070075] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:12:06.502 [2024-09-28 10:32:41.089930] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:06.502 [2024-09-28 10:32:41.128321] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.279  Copying: 321/1024 [MB] (321 MBps) Copying: 643/1024 [MB] (322 MBps) Copying: 965/1024 [MB] (321 MBps) Copying: 1024/1024 [MB] (average 321 MBps) 00:12:10.279 00:12:10.279 10:32:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:12:10.279 10:32:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:10.279 00:12:10.279 real 0m16.309s 00:12:10.279 user 0m13.548s 00:12:10.279 sys 0m2.264s 00:12:10.279 ************************************ 00:12:10.279 10:32:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:10.279 10:32:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:10.279 END TEST xnvme_to_malloc_dd_copy 00:12:10.279 ************************************ 00:12:10.279 10:32:44 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:10.279 10:32:44 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:10.279 10:32:44 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:10.279 10:32:44 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:10.279 ************************************ 00:12:10.279 START TEST xnvme_bdevperf 00:12:10.279 ************************************ 00:12:10.279 10:32:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1125 -- # xnvme_bdevperf 00:12:10.279 10:32:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:12:10.279 10:32:44 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:10.279 10:32:44 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:10.279 10:32:44 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:12:10.279 10:32:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:12:10.279 10:32:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:10.279 10:32:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:12:10.279 10:32:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:12:10.279 10:32:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:12:10.279 10:32:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:12:10.279 10:32:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:12:10.279 10:32:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:12:10.279 10:32:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:10.279 10:32:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:10.279 10:32:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:10.279 10:32:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:10.279 10:32:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:10.279 10:32:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:10.279 10:32:44 
nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:10.279 10:32:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:10.279 { 00:12:10.279 "subsystems": [ 00:12:10.279 { 00:12:10.279 "subsystem": "bdev", 00:12:10.279 "config": [ 00:12:10.279 { 00:12:10.279 "params": { 00:12:10.279 "io_mechanism": "libaio", 00:12:10.279 "filename": "/dev/nullb0", 00:12:10.279 "name": "null0" 00:12:10.280 }, 00:12:10.280 "method": "bdev_xnvme_create" 00:12:10.280 }, 00:12:10.280 { 00:12:10.280 "method": "bdev_wait_for_examine" 00:12:10.280 } 00:12:10.280 ] 00:12:10.280 } 00:12:10.280 ] 00:12:10.280 } 00:12:10.280 [2024-09-28 10:32:45.031435] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:12:10.280 [2024-09-28 10:32:45.031558] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81984 ] 00:12:10.601 [2024-09-28 10:32:45.162062] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:10.601 [2024-09-28 10:32:45.182022] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.601 [2024-09-28 10:32:45.223614] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.601 Running I/O for 5 seconds... 00:12:15.744 209536.00 IOPS, 818.50 MiB/s 209792.00 IOPS, 819.50 MiB/s 209621.33 IOPS, 818.83 MiB/s 209808.00 IOPS, 819.56 MiB/s 00:12:15.744 Latency(us) 00:12:15.744 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:15.744 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:15.744 null0 : 5.00 209795.28 819.51 0.00 0.00 302.98 106.34 1512.37 00:12:15.744 =================================================================================================================== 00:12:15.744 Total : 209795.28 819.51 0.00 0.00 302.98 106.34 1512.37 00:12:15.744 10:32:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:15.745 10:32:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:15.745 10:32:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:15.745 10:32:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:15.745 10:32:50 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:15.745 10:32:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:15.745 { 00:12:15.745 "subsystems": [ 00:12:15.745 { 00:12:15.745 "subsystem": "bdev", 00:12:15.745 "config": [ 00:12:15.745 { 00:12:15.745 "params": { 00:12:15.745 "io_mechanism": "io_uring", 00:12:15.745 "filename": "/dev/nullb0", 00:12:15.745 "name": "null0" 00:12:15.745 }, 00:12:15.745 "method": "bdev_xnvme_create" 00:12:15.745 }, 00:12:15.745 { 00:12:15.745 "method": "bdev_wait_for_examine" 00:12:15.745 } 00:12:15.745 ] 00:12:15.745 } 00:12:15.745 ] 00:12:15.745 } 00:12:16.005 [2024-09-28 10:32:50.524942] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
00:12:16.005 [2024-09-28 10:32:50.525070] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82047 ] 00:12:16.005 [2024-09-28 10:32:50.654191] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:16.005 [2024-09-28 10:32:50.674005] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:16.005 [2024-09-28 10:32:50.710408] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.005 Running I/O for 5 seconds... 00:12:21.122 238592.00 IOPS, 932.00 MiB/s 238496.00 IOPS, 931.62 MiB/s 238421.33 IOPS, 931.33 MiB/s 238416.00 IOPS, 931.31 MiB/s 00:12:21.122 Latency(us) 00:12:21.122 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:21.122 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:21.122 null0 : 5.00 238328.21 930.97 0.00 0.00 266.22 146.51 1487.16 00:12:21.122 =================================================================================================================== 00:12:21.122 Total : 238328.21 930.97 0.00 0.00 266.22 146.51 1487.16 00:12:21.381 10:32:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:12:21.381 10:32:55 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:21.381 00:12:21.381 real 0m11.004s 00:12:21.381 user 0m8.676s 00:12:21.381 sys 0m2.099s 00:12:21.381 10:32:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:21.381 ************************************ 00:12:21.381 END TEST xnvme_bdevperf 00:12:21.381 ************************************ 00:12:21.381 10:32:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:21.381 00:12:21.381 real 0m27.582s 00:12:21.381 user 0m22.346s 00:12:21.381 sys 0m4.474s 00:12:21.381 10:32:55 nvme_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:21.381 ************************************ 00:12:21.381 END TEST nvme_xnvme 00:12:21.381 ************************************ 00:12:21.381 10:32:55 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:21.381 10:32:56 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:21.381 10:32:56 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:21.381 10:32:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:21.381 10:32:56 -- common/autotest_common.sh@10 -- # set +x 00:12:21.381 ************************************ 00:12:21.381 START TEST blockdev_xnvme 00:12:21.381 ************************************ 00:12:21.381 10:32:56 blockdev_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:21.381 * Looking for test storage... 
00:12:21.381 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:21.381 10:32:56 blockdev_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:21.382 10:32:56 blockdev_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:21.382 10:32:56 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:12:21.642 10:32:56 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:21.642 10:32:56 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:21.642 10:32:56 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:21.642 10:32:56 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:21.642 10:32:56 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:21.642 10:32:56 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:21.642 10:32:56 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:21.642 10:32:56 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:21.642 10:32:56 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:21.642 10:32:56 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:21.642 10:32:56 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:21.642 10:32:56 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:21.642 10:32:56 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:21.642 10:32:56 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:12:21.642 10:32:56 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:21.642 10:32:56 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:21.642 10:32:56 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:21.642 10:32:56 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:21.642 10:32:56 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:21.642 10:32:56 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:21.642 10:32:56 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:21.642 10:32:56 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:21.642 10:32:56 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:21.642 10:32:56 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:21.642 10:32:56 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:21.642 10:32:56 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:21.642 10:32:56 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:21.642 10:32:56 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:21.642 10:32:56 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:12:21.642 10:32:56 blockdev_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:21.642 10:32:56 blockdev_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:21.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.642 --rc genhtml_branch_coverage=1 00:12:21.642 --rc genhtml_function_coverage=1 00:12:21.642 --rc genhtml_legend=1 00:12:21.642 --rc geninfo_all_blocks=1 00:12:21.642 --rc geninfo_unexecuted_blocks=1 00:12:21.642 00:12:21.642 ' 00:12:21.642 10:32:56 blockdev_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:21.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.642 --rc genhtml_branch_coverage=1 00:12:21.642 --rc genhtml_function_coverage=1 00:12:21.642 --rc genhtml_legend=1 
00:12:21.642 --rc geninfo_all_blocks=1 00:12:21.642 --rc geninfo_unexecuted_blocks=1 00:12:21.642 00:12:21.642 ' 00:12:21.642 10:32:56 blockdev_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:21.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.642 --rc genhtml_branch_coverage=1 00:12:21.642 --rc genhtml_function_coverage=1 00:12:21.642 --rc genhtml_legend=1 00:12:21.642 --rc geninfo_all_blocks=1 00:12:21.642 --rc geninfo_unexecuted_blocks=1 00:12:21.642 00:12:21.642 ' 00:12:21.642 10:32:56 blockdev_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:21.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.642 --rc genhtml_branch_coverage=1 00:12:21.643 --rc genhtml_function_coverage=1 00:12:21.643 --rc genhtml_legend=1 00:12:21.643 --rc geninfo_all_blocks=1 00:12:21.643 --rc geninfo_unexecuted_blocks=1 00:12:21.643 00:12:21.643 ' 00:12:21.643 10:32:56 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:21.643 10:32:56 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:12:21.643 10:32:56 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:21.643 10:32:56 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:21.643 10:32:56 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:21.643 10:32:56 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:21.643 10:32:56 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:12:21.643 10:32:56 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:12:21.643 10:32:56 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:12:21.643 10:32:56 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:12:21.643 10:32:56 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:12:21.643 10:32:56 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:12:21.643 10:32:56 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:12:21.643 10:32:56 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:12:21.643 10:32:56 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:12:21.643 10:32:56 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:12:21.643 10:32:56 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:12:21.643 10:32:56 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:12:21.643 10:32:56 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:12:21.643 10:32:56 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:12:21.643 10:32:56 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:12:21.643 10:32:56 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:12:21.643 10:32:56 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:12:21.643 10:32:56 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:12:21.643 10:32:56 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=82184 00:12:21.643 10:32:56 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:21.643 10:32:56 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 82184 00:12:21.643 10:32:56 blockdev_xnvme -- common/autotest_common.sh@831 -- # '[' -z 82184 ']' 00:12:21.643 10:32:56 blockdev_xnvme -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:12:21.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.643 10:32:56 blockdev_xnvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:21.643 10:32:56 blockdev_xnvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.643 10:32:56 blockdev_xnvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:21.643 10:32:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:21.643 10:32:56 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:21.643 [2024-09-28 10:32:56.305502] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:12:21.643 [2024-09-28 10:32:56.305651] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82184 ] 00:12:21.904 [2024-09-28 10:32:56.438300] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:21.904 [2024-09-28 10:32:56.457352] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.904 [2024-09-28 10:32:56.508183] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.477 10:32:57 blockdev_xnvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:22.477 10:32:57 blockdev_xnvme -- common/autotest_common.sh@864 -- # return 0 00:12:22.477 10:32:57 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:12:22.477 10:32:57 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:12:22.477 10:32:57 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:12:22.477 10:32:57 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:12:22.477 10:32:57 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:22.739 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:23.001 Waiting for block devices as requested 00:12:23.002 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:23.002 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:23.263 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:23.263 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:28.540 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:28.540 10:33:03 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:12:28.540 10:33:03 blockdev_xnvme -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:12:28.540 10:33:03 blockdev_xnvme -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:12:28.540 10:33:03 blockdev_xnvme -- common/autotest_common.sh@1656 -- # local nvme bdf 00:12:28.540 10:33:03 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:28.540 10:33:03 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:12:28.540 10:33:03 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:12:28.540 10:33:03 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:12:28.540 10:33:03 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 
00:12:28.540 10:33:03 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:28.540 10:33:03 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:12:28.540 10:33:03 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:12:28.540 10:33:03 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:12:28.540 10:33:03 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:28.540 10:33:03 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:28.540 10:33:03 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:12:28.540 10:33:03 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:12:28.540 10:33:03 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:12:28.540 10:33:03 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:28.540 10:33:03 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:28.540 10:33:03 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:12:28.540 10:33:03 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:12:28.540 10:33:03 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:12:28.540 10:33:03 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:28.540 10:33:03 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:28.540 10:33:03 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:12:28.540 10:33:03 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:12:28.540 10:33:03 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:12:28.540 10:33:03 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:28.540 10:33:03 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:28.540 10:33:03 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:12:28.540 10:33:03 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:12:28.540 10:33:03 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:12:28.540 10:33:03 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:28.540 10:33:03 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:28.540 10:33:03 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:12:28.541 10:33:03 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:12:28.541 10:33:03 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:12:28.541 10:33:03 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:28.541 10:33:03 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:28.541 10:33:03 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:12:28.541 10:33:03 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:28.541 10:33:03 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:28.541 10:33:03 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 
00:12:28.541 10:33:03 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:12:28.541 10:33:03 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:28.541 10:33:03 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:28.541 10:33:03 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:28.541 10:33:03 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:12:28.541 10:33:03 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:28.541 10:33:03 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:28.541 10:33:03 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:28.541 10:33:03 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:12:28.541 10:33:03 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:28.541 10:33:03 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:28.541 10:33:03 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:28.541 10:33:03 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:12:28.541 10:33:03 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:28.541 10:33:03 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:28.541 10:33:03 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:28.541 10:33:03 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:12:28.541 10:33:03 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:28.541 10:33:03 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:28.541 10:33:03 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:12:28.541 10:33:03 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:12:28.541 10:33:03 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.541 10:33:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:28.541 10:33:03 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:12:28.541 nvme0n1 00:12:28.541 nvme1n1 00:12:28.541 nvme2n1 00:12:28.541 nvme2n2 00:12:28.541 nvme2n3 00:12:28.541 nvme3n1 00:12:28.541 10:33:03 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.541 10:33:03 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:12:28.541 10:33:03 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.541 10:33:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:28.541 10:33:03 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.541 10:33:03 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:12:28.541 10:33:03 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:12:28.541 10:33:03 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.541 10:33:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:28.541 10:33:03 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.541 10:33:03 blockdev_xnvme -- 
bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:12:28.541 10:33:03 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.541 10:33:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:28.541 10:33:03 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.541 10:33:03 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:28.541 10:33:03 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.541 10:33:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:28.541 10:33:03 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.541 10:33:03 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:12:28.541 10:33:03 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:12:28.541 10:33:03 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.541 10:33:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:28.541 10:33:03 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:12:28.541 10:33:03 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.541 10:33:03 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:12:28.541 10:33:03 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:12:28.541 10:33:03 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "430e6e22-3dc0-44d1-a103-11f8ae53cc03"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "430e6e22-3dc0-44d1-a103-11f8ae53cc03",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "005cca2f-9ee9-404b-b385-a08b12015f2c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "005cca2f-9ee9-404b-b385-a08b12015f2c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "52adf586-21d2-458e-9131-c06ea5ca21cc"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "52adf586-21d2-458e-9131-c06ea5ca21cc",' ' "assigned_rate_limits": {' ' 
"rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "6ae3ccb9-f47a-44bd-b523-311aaa710091"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6ae3ccb9-f47a-44bd-b523-311aaa710091",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "c96f0e9a-1012-4b3d-aba2-008835e3cbc5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c96f0e9a-1012-4b3d-aba2-008835e3cbc5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "37ff3147-e698-43e3-aac1-e5dd355fb5de"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "37ff3147-e698-43e3-aac1-e5dd355fb5de",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:12:28.541 10:33:03 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:12:28.541 10:33:03 blockdev_xnvme -- bdev/blockdev.sh@751 -- # 
hello_world_bdev=nvme0n1 00:12:28.541 10:33:03 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:12:28.541 10:33:03 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 82184 00:12:28.541 10:33:03 blockdev_xnvme -- common/autotest_common.sh@950 -- # '[' -z 82184 ']' 00:12:28.541 10:33:03 blockdev_xnvme -- common/autotest_common.sh@954 -- # kill -0 82184 00:12:28.541 10:33:03 blockdev_xnvme -- common/autotest_common.sh@955 -- # uname 00:12:28.541 10:33:03 blockdev_xnvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:28.541 10:33:03 blockdev_xnvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82184 00:12:28.541 10:33:03 blockdev_xnvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:28.541 10:33:03 blockdev_xnvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:28.541 10:33:03 blockdev_xnvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82184' 00:12:28.541 killing process with pid 82184 00:12:28.541 10:33:03 blockdev_xnvme -- common/autotest_common.sh@969 -- # kill 82184 00:12:28.541 10:33:03 blockdev_xnvme -- common/autotest_common.sh@974 -- # wait 82184 00:12:28.802 10:33:03 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:28.802 10:33:03 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:28.802 10:33:03 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:28.802 10:33:03 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:28.802 10:33:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:28.802 ************************************ 00:12:28.802 START TEST bdev_hello_world 00:12:28.802 ************************************ 00:12:28.802 10:33:03 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:28.802 [2024-09-28 10:33:03.491581] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:12:28.802 [2024-09-28 10:33:03.491670] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82532 ] 00:12:29.061 [2024-09-28 10:33:03.613083] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:12:29.061 [2024-09-28 10:33:03.634247] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:29.061 [2024-09-28 10:33:03.662260] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.061 [2024-09-28 10:33:03.818309] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:29.061 [2024-09-28 10:33:03.818350] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:12:29.061 [2024-09-28 10:33:03.818363] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:29.061 [2024-09-28 10:33:03.819837] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:29.061 [2024-09-28 10:33:03.820143] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:29.061 [2024-09-28 10:33:03.820165] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:29.061 [2024-09-28 10:33:03.820386] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:12:29.061 00:12:29.061 [2024-09-28 10:33:03.820413] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:29.321 00:12:29.321 real 0m0.505s 00:12:29.321 user 0m0.271s 00:12:29.321 sys 0m0.127s 00:12:29.321 ************************************ 00:12:29.321 END TEST bdev_hello_world 00:12:29.321 10:33:03 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:29.321 10:33:03 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:12:29.321 ************************************ 00:12:29.321 10:33:03 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:12:29.321 10:33:03 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:29.321 10:33:03 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:29.321 10:33:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:29.321 ************************************ 00:12:29.321 START TEST bdev_bounds 00:12:29.321 ************************************ 00:12:29.321 10:33:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:12:29.321 Process bdevio pid: 82552 00:12:29.321 10:33:04 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=82552 00:12:29.321 10:33:04 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:29.321 10:33:04 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 82552' 00:12:29.321 10:33:04 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 82552 00:12:29.321 10:33:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 82552 ']' 00:12:29.321 10:33:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:29.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:29.321 10:33:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:29.321 10:33:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:29.321 10:33:04 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:29.321 10:33:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:29.321 10:33:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:29.321 [2024-09-28 10:33:04.066392] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:12:29.321 [2024-09-28 10:33:04.066505] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82552 ] 00:12:29.579 [2024-09-28 10:33:04.195389] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:29.579 [2024-09-28 10:33:04.211366] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:29.579 [2024-09-28 10:33:04.240633] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:29.579 [2024-09-28 10:33:04.240948] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.579 [2024-09-28 10:33:04.241055] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:30.143 10:33:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:30.143 10:33:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:12:30.143 10:33:04 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:30.400 I/O targets: 00:12:30.400 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:12:30.400 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:12:30.400 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:30.400 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:30.400 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:30.400 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:12:30.400 00:12:30.400 00:12:30.400 CUnit - A unit testing framework for C - Version 2.1-3 00:12:30.400 http://cunit.sourceforge.net/ 00:12:30.400 00:12:30.400 00:12:30.400 Suite: bdevio tests on: nvme3n1 00:12:30.400 Test: blockdev write read block ...passed 00:12:30.400 Test: blockdev write zeroes read block ...passed 00:12:30.400 Test: blockdev write zeroes read no split ...passed 00:12:30.400 Test: blockdev write zeroes read split ...passed 00:12:30.400 Test: blockdev write zeroes read split partial ...passed 00:12:30.400 Test: blockdev reset ...passed 00:12:30.400 Test: blockdev write read 8 blocks ...passed 00:12:30.400 Test: blockdev write read size > 128k ...passed 00:12:30.400 Test: blockdev write read invalid size ...passed 00:12:30.400 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:30.400 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:30.400 Test: blockdev write read max offset ...passed 00:12:30.400 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:30.400 Test: blockdev writev readv 8 blocks ...passed 00:12:30.400 Test: blockdev writev readv 30 x 1block ...passed 00:12:30.400 Test: blockdev writev readv block ...passed 00:12:30.400 Test: blockdev writev readv size > 128k ...passed 00:12:30.400 Test: blockdev writev readv size > 128k in two iovs 
...passed 00:12:30.400 Test: blockdev comparev and writev ...passed 00:12:30.400 Test: blockdev nvme passthru rw ...passed 00:12:30.400 Test: blockdev nvme passthru vendor specific ...passed 00:12:30.400 Test: blockdev nvme admin passthru ...passed 00:12:30.400 Test: blockdev copy ...passed 00:12:30.400 Suite: bdevio tests on: nvme2n3 00:12:30.400 Test: blockdev write read block ...passed 00:12:30.400 Test: blockdev write zeroes read block ...passed 00:12:30.400 Test: blockdev write zeroes read no split ...passed 00:12:30.400 Test: blockdev write zeroes read split ...passed 00:12:30.400 Test: blockdev write zeroes read split partial ...passed 00:12:30.400 Test: blockdev reset ...passed 00:12:30.400 Test: blockdev write read 8 blocks ...passed 00:12:30.400 Test: blockdev write read size > 128k ...passed 00:12:30.400 Test: blockdev write read invalid size ...passed 00:12:30.400 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:30.400 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:30.400 Test: blockdev write read max offset ...passed 00:12:30.400 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:30.400 Test: blockdev writev readv 8 blocks ...passed 00:12:30.400 Test: blockdev writev readv 30 x 1block ...passed 00:12:30.400 Test: blockdev writev readv block ...passed 00:12:30.400 Test: blockdev writev readv size > 128k ...passed 00:12:30.400 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:30.400 Test: blockdev comparev and writev ...passed 00:12:30.400 Test: blockdev nvme passthru rw ...passed 00:12:30.400 Test: blockdev nvme passthru vendor specific ...passed 00:12:30.400 Test: blockdev nvme admin passthru ...passed 00:12:30.400 Test: blockdev copy ...passed 00:12:30.400 Suite: bdevio tests on: nvme2n2 00:12:30.400 Test: blockdev write read block ...passed 00:12:30.400 Test: blockdev write zeroes read block ...passed 00:12:30.400 Test: blockdev write zeroes read no split ...passed 00:12:30.400 Test: blockdev write zeroes read split ...passed 00:12:30.400 Test: blockdev write zeroes read split partial ...passed 00:12:30.400 Test: blockdev reset ...passed 00:12:30.400 Test: blockdev write read 8 blocks ...passed 00:12:30.400 Test: blockdev write read size > 128k ...passed 00:12:30.400 Test: blockdev write read invalid size ...passed 00:12:30.400 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:30.400 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:30.400 Test: blockdev write read max offset ...passed 00:12:30.400 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:30.400 Test: blockdev writev readv 8 blocks ...passed 00:12:30.400 Test: blockdev writev readv 30 x 1block ...passed 00:12:30.400 Test: blockdev writev readv block ...passed 00:12:30.400 Test: blockdev writev readv size > 128k ...passed 00:12:30.400 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:30.400 Test: blockdev comparev and writev ...passed 00:12:30.400 Test: blockdev nvme passthru rw ...passed 00:12:30.400 Test: blockdev nvme passthru vendor specific ...passed 00:12:30.400 Test: blockdev nvme admin passthru ...passed 00:12:30.400 Test: blockdev copy ...passed 00:12:30.400 Suite: bdevio tests on: nvme2n1 00:12:30.400 Test: blockdev write read block ...passed 00:12:30.400 Test: blockdev write zeroes read block ...passed 00:12:30.400 Test: blockdev write zeroes read no split ...passed 00:12:30.400 Test: blockdev write 
zeroes read split ...passed 00:12:30.400 Test: blockdev write zeroes read split partial ...passed 00:12:30.400 Test: blockdev reset ...passed 00:12:30.400 Test: blockdev write read 8 blocks ...passed 00:12:30.400 Test: blockdev write read size > 128k ...passed 00:12:30.400 Test: blockdev write read invalid size ...passed 00:12:30.400 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:30.400 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:30.400 Test: blockdev write read max offset ...passed 00:12:30.400 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:30.400 Test: blockdev writev readv 8 blocks ...passed 00:12:30.400 Test: blockdev writev readv 30 x 1block ...passed 00:12:30.400 Test: blockdev writev readv block ...passed 00:12:30.400 Test: blockdev writev readv size > 128k ...passed 00:12:30.400 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:30.400 Test: blockdev comparev and writev ...passed 00:12:30.400 Test: blockdev nvme passthru rw ...passed 00:12:30.400 Test: blockdev nvme passthru vendor specific ...passed 00:12:30.400 Test: blockdev nvme admin passthru ...passed 00:12:30.400 Test: blockdev copy ...passed 00:12:30.400 Suite: bdevio tests on: nvme1n1 00:12:30.400 Test: blockdev write read block ...passed 00:12:30.400 Test: blockdev write zeroes read block ...passed 00:12:30.400 Test: blockdev write zeroes read no split ...passed 00:12:30.400 Test: blockdev write zeroes read split ...passed 00:12:30.400 Test: blockdev write zeroes read split partial ...passed 00:12:30.400 Test: blockdev reset ...passed 00:12:30.400 Test: blockdev write read 8 blocks ...passed 00:12:30.400 Test: blockdev write read size > 128k ...passed 00:12:30.400 Test: blockdev write read invalid size ...passed 00:12:30.400 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:30.400 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:30.400 Test: blockdev write read max offset ...passed 00:12:30.400 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:30.400 Test: blockdev writev readv 8 blocks ...passed 00:12:30.400 Test: blockdev writev readv 30 x 1block ...passed 00:12:30.400 Test: blockdev writev readv block ...passed 00:12:30.400 Test: blockdev writev readv size > 128k ...passed 00:12:30.400 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:30.400 Test: blockdev comparev and writev ...passed 00:12:30.400 Test: blockdev nvme passthru rw ...passed 00:12:30.400 Test: blockdev nvme passthru vendor specific ...passed 00:12:30.400 Test: blockdev nvme admin passthru ...passed 00:12:30.400 Test: blockdev copy ...passed 00:12:30.400 Suite: bdevio tests on: nvme0n1 00:12:30.400 Test: blockdev write read block ...passed 00:12:30.400 Test: blockdev write zeroes read block ...passed 00:12:30.400 Test: blockdev write zeroes read no split ...passed 00:12:30.400 Test: blockdev write zeroes read split ...passed 00:12:30.400 Test: blockdev write zeroes read split partial ...passed 00:12:30.400 Test: blockdev reset ...passed 00:12:30.400 Test: blockdev write read 8 blocks ...passed 00:12:30.400 Test: blockdev write read size > 128k ...passed 00:12:30.400 Test: blockdev write read invalid size ...passed 00:12:30.400 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:30.400 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:30.400 Test: blockdev write read max offset ...passed 
00:12:30.400 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:30.400 Test: blockdev writev readv 8 blocks ...passed 00:12:30.400 Test: blockdev writev readv 30 x 1block ...passed 00:12:30.400 Test: blockdev writev readv block ...passed 00:12:30.400 Test: blockdev writev readv size > 128k ...passed 00:12:30.400 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:30.400 Test: blockdev comparev and writev ...passed 00:12:30.400 Test: blockdev nvme passthru rw ...passed 00:12:30.400 Test: blockdev nvme passthru vendor specific ...passed 00:12:30.400 Test: blockdev nvme admin passthru ...passed 00:12:30.400 Test: blockdev copy ...passed 00:12:30.400 00:12:30.400 Run Summary: Type Total Ran Passed Failed Inactive 00:12:30.400 suites 6 6 n/a 0 0 00:12:30.400 tests 138 138 138 0 0 00:12:30.400 asserts 780 780 780 0 n/a 00:12:30.400 00:12:30.400 Elapsed time = 0.246 seconds 00:12:30.400 0 00:12:30.400 10:33:05 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 82552 00:12:30.400 10:33:05 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 82552 ']' 00:12:30.400 10:33:05 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 82552 00:12:30.400 10:33:05 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:12:30.400 10:33:05 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:30.400 10:33:05 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82552 00:12:30.400 10:33:05 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:30.400 10:33:05 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:30.400 10:33:05 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82552' 00:12:30.400 killing process with pid 82552 00:12:30.400 10:33:05 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 82552 00:12:30.400 10:33:05 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 82552 00:12:30.657 10:33:05 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:12:30.657 00:12:30.657 real 0m1.272s 00:12:30.657 user 0m3.265s 00:12:30.657 sys 0m0.243s 00:12:30.657 10:33:05 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:30.657 10:33:05 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:30.657 ************************************ 00:12:30.657 END TEST bdev_bounds 00:12:30.657 ************************************ 00:12:30.657 10:33:05 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:12:30.657 10:33:05 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:30.657 10:33:05 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:30.657 10:33:05 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:30.657 ************************************ 00:12:30.657 START TEST bdev_nbd 00:12:30.657 ************************************ 00:12:30.657 10:33:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:12:30.657 10:33:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- 
# uname -s 00:12:30.657 10:33:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:12:30.657 10:33:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:30.657 10:33:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:30.657 10:33:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:30.657 10:33:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:12:30.657 10:33:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:12:30.657 10:33:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:12:30.657 10:33:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:30.657 10:33:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:12:30.657 10:33:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:12:30.657 10:33:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:30.657 10:33:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:12:30.657 10:33:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:30.657 10:33:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:12:30.658 10:33:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=82602 00:12:30.658 10:33:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:12:30.658 10:33:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 82602 /var/tmp/spdk-nbd.sock 00:12:30.658 10:33:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:30.658 10:33:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 82602 ']' 00:12:30.658 10:33:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:30.658 10:33:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:30.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:30.658 10:33:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:30.658 10:33:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:30.658 10:33:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:12:30.658 [2024-09-28 10:33:05.406457] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
00:12:30.658 [2024-09-28 10:33:05.406574] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:30.914 [2024-09-28 10:33:05.534371] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:30.914 [2024-09-28 10:33:05.550820] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:30.914 [2024-09-28 10:33:05.579038] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.478 10:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:31.478 10:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:12:31.478 10:33:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:12:31.478 10:33:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:31.478 10:33:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:31.478 10:33:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:12:31.478 10:33:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:12:31.478 10:33:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:31.478 10:33:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:31.478 10:33:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:12:31.478 10:33:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:12:31.478 10:33:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:12:31.478 10:33:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:12:31.478 10:33:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:31.478 10:33:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:12:31.735 10:33:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:12:31.735 10:33:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:12:31.735 10:33:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:12:31.735 10:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:31.735 10:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:31.735 10:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:31.735 10:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:31.735 10:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:31.735 10:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:31.735 10:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:31.735 10:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 
00:12:31.735 10:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:31.735 1+0 records in 00:12:31.735 1+0 records out 00:12:31.735 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000337794 s, 12.1 MB/s 00:12:31.735 10:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:31.735 10:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:31.735 10:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:31.735 10:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:31.735 10:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:31.735 10:33:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:31.735 10:33:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:31.735 10:33:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:12:31.993 10:33:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:12:31.993 10:33:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:12:31.993 10:33:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:12:31.993 10:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:31.993 10:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:31.993 10:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:31.993 10:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:31.993 10:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:31.993 10:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:31.993 10:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:31.993 10:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:31.993 10:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:31.993 1+0 records in 00:12:31.993 1+0 records out 00:12:31.993 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000387392 s, 10.6 MB/s 00:12:31.993 10:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:31.993 10:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:31.993 10:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:31.993 10:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:31.993 10:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:31.993 10:33:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:31.993 10:33:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:31.993 10:33:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:12:32.251 10:33:06 
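[Editor's note] Every nbd_start_disk above is followed by the same readiness check: wait for the device to appear in /proc/partitions, then prove it actually serves I/O with a single 4 KiB O_DIRECT read. A condensed sketch of that check; the helper name, the sleep between retries, and the scratch path are illustrative, not part of the test:

  wait_for_nbd() {
      local nbd=$1 tmp=/tmp/nbdtest
      for i in $(seq 1 20); do
          grep -q -w "$nbd" /proc/partitions && break
          sleep 0.1
      done
      dd if=/dev/"$nbd" of="$tmp" bs=4096 count=1 iflag=direct   # one 4 KiB direct read
      test "$(stat -c %s "$tmp")" -ne 0 && echo "/dev/$nbd is serving I/O"
      rm -f "$tmp"
  }
  wait_for_nbd nbd0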
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:12:32.251 10:33:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:12:32.251 10:33:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:12:32.251 10:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:12:32.251 10:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:32.251 10:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:32.251 10:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:32.251 10:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:12:32.251 10:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:32.251 10:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:32.251 10:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:32.251 10:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:32.251 1+0 records in 00:12:32.251 1+0 records out 00:12:32.251 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000352467 s, 11.6 MB/s 00:12:32.251 10:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:32.251 10:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:32.251 10:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:32.251 10:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:32.251 10:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:32.251 10:33:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:32.251 10:33:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:32.251 10:33:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:12:32.509 10:33:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:12:32.509 10:33:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:12:32.509 10:33:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:12:32.509 10:33:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:12:32.509 10:33:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:32.509 10:33:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:32.509 10:33:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:32.509 10:33:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:12:32.509 10:33:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:32.509 10:33:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:32.509 10:33:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:32.509 10:33:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:32.509 1+0 
records in 00:12:32.509 1+0 records out 00:12:32.509 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000452829 s, 9.0 MB/s 00:12:32.510 10:33:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:32.510 10:33:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:32.510 10:33:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:32.510 10:33:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:32.510 10:33:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:32.510 10:33:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:32.510 10:33:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:32.510 10:33:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:12:32.768 10:33:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:12:32.768 10:33:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:12:32.768 10:33:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:12:32.768 10:33:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:12:32.768 10:33:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:32.768 10:33:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:32.768 10:33:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:32.768 10:33:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:12:32.768 10:33:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:32.768 10:33:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:32.768 10:33:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:32.768 10:33:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:32.768 1+0 records in 00:12:32.768 1+0 records out 00:12:32.768 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000415777 s, 9.9 MB/s 00:12:32.768 10:33:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:32.768 10:33:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:32.768 10:33:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:32.768 10:33:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:32.768 10:33:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:32.768 10:33:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:32.768 10:33:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:32.768 10:33:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:12:33.026 10:33:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:12:33.026 10:33:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:12:33.026 10:33:07 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:12:33.026 10:33:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:12:33.026 10:33:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:33.026 10:33:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:33.026 10:33:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:33.026 10:33:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:12:33.026 10:33:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:33.026 10:33:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:33.026 10:33:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:33.026 10:33:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:33.026 1+0 records in 00:12:33.026 1+0 records out 00:12:33.026 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00053185 s, 7.7 MB/s 00:12:33.026 10:33:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:33.026 10:33:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:33.026 10:33:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:33.026 10:33:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:33.026 10:33:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:33.026 10:33:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:33.026 10:33:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:33.026 10:33:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:33.284 10:33:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:12:33.284 { 00:12:33.284 "nbd_device": "/dev/nbd0", 00:12:33.284 "bdev_name": "nvme0n1" 00:12:33.284 }, 00:12:33.284 { 00:12:33.284 "nbd_device": "/dev/nbd1", 00:12:33.284 "bdev_name": "nvme1n1" 00:12:33.284 }, 00:12:33.284 { 00:12:33.284 "nbd_device": "/dev/nbd2", 00:12:33.284 "bdev_name": "nvme2n1" 00:12:33.284 }, 00:12:33.284 { 00:12:33.284 "nbd_device": "/dev/nbd3", 00:12:33.284 "bdev_name": "nvme2n2" 00:12:33.284 }, 00:12:33.284 { 00:12:33.284 "nbd_device": "/dev/nbd4", 00:12:33.284 "bdev_name": "nvme2n3" 00:12:33.284 }, 00:12:33.284 { 00:12:33.284 "nbd_device": "/dev/nbd5", 00:12:33.284 "bdev_name": "nvme3n1" 00:12:33.284 } 00:12:33.284 ]' 00:12:33.284 10:33:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:12:33.284 10:33:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:12:33.284 { 00:12:33.284 "nbd_device": "/dev/nbd0", 00:12:33.284 "bdev_name": "nvme0n1" 00:12:33.284 }, 00:12:33.284 { 00:12:33.284 "nbd_device": "/dev/nbd1", 00:12:33.284 "bdev_name": "nvme1n1" 00:12:33.284 }, 00:12:33.284 { 00:12:33.284 "nbd_device": "/dev/nbd2", 00:12:33.284 "bdev_name": "nvme2n1" 00:12:33.284 }, 00:12:33.284 { 00:12:33.284 "nbd_device": "/dev/nbd3", 00:12:33.284 "bdev_name": "nvme2n2" 00:12:33.284 }, 00:12:33.284 { 00:12:33.284 "nbd_device": "/dev/nbd4", 
00:12:33.284 "bdev_name": "nvme2n3" 00:12:33.284 }, 00:12:33.284 { 00:12:33.284 "nbd_device": "/dev/nbd5", 00:12:33.284 "bdev_name": "nvme3n1" 00:12:33.284 } 00:12:33.284 ]' 00:12:33.284 10:33:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:12:33.284 10:33:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:12:33.284 10:33:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:33.284 10:33:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:12:33.284 10:33:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:33.284 10:33:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:33.284 10:33:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:33.284 10:33:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:33.284 10:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:33.284 10:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:33.284 10:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:33.284 10:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:33.284 10:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:33.284 10:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:33.284 10:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:33.284 10:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:33.284 10:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:33.284 10:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:33.542 10:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:33.542 10:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:33.542 10:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:33.542 10:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:33.542 10:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:33.542 10:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:33.542 10:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:33.542 10:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:33.542 10:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:33.542 10:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:33.800 10:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:33.800 10:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:33.800 10:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:33.800 10:33:08 blockdev_xnvme.bdev_nbd -- 
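[Editor's note] The JSON above is the output of nbd_get_disks; the test pipes it through jq to recover the device list before stopping each export. The same query can be run directly against the socket (adapted from the trace; printing both fields instead of only .nbd_device is an illustrative variation):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
      | jq -r '.[] | "\(.nbd_device) -> \(.bdev_name)"'
  # prints e.g. "/dev/nbd0 -> nvme0n1", one line per exported bdev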
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:33.800 10:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:33.800 10:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:33.800 10:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:33.800 10:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:33.800 10:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:33.800 10:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:34.058 10:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:34.058 10:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:34.058 10:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:34.058 10:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:34.058 10:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:34.058 10:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:34.058 10:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:34.058 10:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:34.058 10:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:34.058 10:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:34.315 10:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:34.315 10:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:34.315 10:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:34.315 10:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:34.315 10:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:34.315 10:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:34.315 10:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:34.315 10:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:34.315 10:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:34.315 10:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:34.315 10:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:34.315 10:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:34.315 10:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:34.315 10:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:34.316 10:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:34.316 10:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:34.316 10:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:34.316 10:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:34.316 10:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:12:34.316 10:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:34.316 10:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:34.573 10:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:34.573 10:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:34.573 10:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:34.573 10:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:34.573 10:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:34.573 10:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:34.573 10:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:34.573 10:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:34.574 10:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:34.574 10:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:12:34.574 10:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:12:34.574 10:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:12:34.574 10:33:09 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:34.574 10:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:34.574 10:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:34.574 10:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:34.574 10:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:34.574 10:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:34.574 10:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:34.574 10:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:34.574 10:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:34.574 10:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:34.574 10:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:34.574 10:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:34.574 10:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:12:34.574 10:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:34.574 10:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:34.574 10:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:12:34.832 /dev/nbd0 00:12:34.832 10:33:09 
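[Editor's note] In the first pass the RPC picked the device node itself; in this nbd_rpc_data_verify pass each bdev is pinned to a caller-chosen /dev/nbd* path by passing the device explicitly to nbd_start_disk. A sketch of that loop, with names and device list copied from the trace:

  bdevs=(nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1)
  nbds=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
  for i in "${!bdevs[@]}"; do
      # The second argument pins the export to a specific kernel NBD node.
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
          nbd_start_disk "${bdevs[$i]}" "${nbds[$i]}"
  done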
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:34.832 10:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:34.832 10:33:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:34.832 10:33:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:34.832 10:33:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:34.832 10:33:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:34.832 10:33:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:34.832 10:33:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:34.832 10:33:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:34.832 10:33:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:34.832 10:33:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:34.833 1+0 records in 00:12:34.833 1+0 records out 00:12:34.833 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000443181 s, 9.2 MB/s 00:12:34.833 10:33:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:34.833 10:33:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:34.833 10:33:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:34.833 10:33:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:34.833 10:33:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:34.833 10:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:34.833 10:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:34.833 10:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:12:35.091 /dev/nbd1 00:12:35.091 10:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:35.091 10:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:35.092 10:33:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:35.092 10:33:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:35.092 10:33:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:35.092 10:33:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:35.092 10:33:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:35.092 10:33:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:35.092 10:33:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:35.092 10:33:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:35.092 10:33:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:35.092 1+0 records in 00:12:35.092 1+0 records out 00:12:35.092 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000592098 s, 6.9 MB/s 00:12:35.092 10:33:09 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:35.092 10:33:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:35.092 10:33:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:35.092 10:33:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:35.092 10:33:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:35.092 10:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:35.092 10:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:35.092 10:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:12:35.350 /dev/nbd10 00:12:35.350 10:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:12:35.350 10:33:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:12:35.350 10:33:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:12:35.350 10:33:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:35.350 10:33:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:35.350 10:33:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:35.350 10:33:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:12:35.350 10:33:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:35.350 10:33:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:35.350 10:33:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:35.350 10:33:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:35.350 1+0 records in 00:12:35.350 1+0 records out 00:12:35.350 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000332644 s, 12.3 MB/s 00:12:35.350 10:33:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:35.350 10:33:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:35.350 10:33:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:35.350 10:33:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:35.350 10:33:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:35.350 10:33:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:35.350 10:33:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:35.350 10:33:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:12:35.608 /dev/nbd11 00:12:35.608 10:33:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:12:35.608 10:33:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:12:35.608 10:33:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:12:35.608 10:33:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:35.608 10:33:10 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:35.608 10:33:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:35.608 10:33:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:12:35.608 10:33:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:35.608 10:33:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:35.608 10:33:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:35.608 10:33:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:35.608 1+0 records in 00:12:35.608 1+0 records out 00:12:35.608 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000492808 s, 8.3 MB/s 00:12:35.608 10:33:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:35.608 10:33:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:35.608 10:33:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:35.608 10:33:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:35.608 10:33:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:35.608 10:33:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:35.608 10:33:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:35.608 10:33:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:12:35.866 /dev/nbd12 00:12:35.866 10:33:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:12:35.866 10:33:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:12:35.866 10:33:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:12:35.866 10:33:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:35.866 10:33:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:35.866 10:33:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:35.866 10:33:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:12:35.866 10:33:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:35.866 10:33:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:35.866 10:33:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:35.866 10:33:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:35.866 1+0 records in 00:12:35.866 1+0 records out 00:12:35.866 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231348 s, 17.7 MB/s 00:12:35.866 10:33:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:35.866 10:33:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:35.866 10:33:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:35.866 10:33:10 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:35.866 10:33:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:35.866 10:33:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:35.866 10:33:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:35.866 10:33:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:12:36.158 /dev/nbd13 00:12:36.158 10:33:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:12:36.158 10:33:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:12:36.158 10:33:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:12:36.158 10:33:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:36.158 10:33:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:36.158 10:33:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:36.158 10:33:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:12:36.158 10:33:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:36.158 10:33:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:36.158 10:33:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:36.158 10:33:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:36.158 1+0 records in 00:12:36.158 1+0 records out 00:12:36.158 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000513417 s, 8.0 MB/s 00:12:36.158 10:33:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:36.158 10:33:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:36.158 10:33:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:36.158 10:33:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:36.158 10:33:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:36.158 10:33:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:36.158 10:33:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:36.158 10:33:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:36.158 10:33:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:36.158 10:33:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:36.158 10:33:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:36.158 { 00:12:36.158 "nbd_device": "/dev/nbd0", 00:12:36.158 "bdev_name": "nvme0n1" 00:12:36.158 }, 00:12:36.158 { 00:12:36.158 "nbd_device": "/dev/nbd1", 00:12:36.158 "bdev_name": "nvme1n1" 00:12:36.158 }, 00:12:36.158 { 00:12:36.158 "nbd_device": "/dev/nbd10", 00:12:36.158 "bdev_name": "nvme2n1" 00:12:36.158 }, 00:12:36.158 { 00:12:36.158 "nbd_device": "/dev/nbd11", 00:12:36.158 "bdev_name": "nvme2n2" 00:12:36.158 }, 00:12:36.158 { 00:12:36.158 "nbd_device": "/dev/nbd12", 00:12:36.158 
"bdev_name": "nvme2n3" 00:12:36.158 }, 00:12:36.158 { 00:12:36.158 "nbd_device": "/dev/nbd13", 00:12:36.158 "bdev_name": "nvme3n1" 00:12:36.158 } 00:12:36.158 ]' 00:12:36.158 10:33:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:36.158 { 00:12:36.158 "nbd_device": "/dev/nbd0", 00:12:36.158 "bdev_name": "nvme0n1" 00:12:36.158 }, 00:12:36.158 { 00:12:36.158 "nbd_device": "/dev/nbd1", 00:12:36.158 "bdev_name": "nvme1n1" 00:12:36.158 }, 00:12:36.158 { 00:12:36.158 "nbd_device": "/dev/nbd10", 00:12:36.158 "bdev_name": "nvme2n1" 00:12:36.158 }, 00:12:36.158 { 00:12:36.158 "nbd_device": "/dev/nbd11", 00:12:36.158 "bdev_name": "nvme2n2" 00:12:36.158 }, 00:12:36.158 { 00:12:36.158 "nbd_device": "/dev/nbd12", 00:12:36.158 "bdev_name": "nvme2n3" 00:12:36.158 }, 00:12:36.158 { 00:12:36.158 "nbd_device": "/dev/nbd13", 00:12:36.158 "bdev_name": "nvme3n1" 00:12:36.158 } 00:12:36.158 ]' 00:12:36.158 10:33:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:36.158 10:33:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:36.158 /dev/nbd1 00:12:36.158 /dev/nbd10 00:12:36.158 /dev/nbd11 00:12:36.158 /dev/nbd12 00:12:36.158 /dev/nbd13' 00:12:36.423 10:33:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:36.423 /dev/nbd1 00:12:36.423 /dev/nbd10 00:12:36.423 /dev/nbd11 00:12:36.423 /dev/nbd12 00:12:36.423 /dev/nbd13' 00:12:36.423 10:33:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:36.423 10:33:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:12:36.423 10:33:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:12:36.423 10:33:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:12:36.423 10:33:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:12:36.423 10:33:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:12:36.423 10:33:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:36.423 10:33:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:36.423 10:33:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:36.423 10:33:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:36.423 10:33:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:36.423 10:33:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:12:36.423 256+0 records in 00:12:36.423 256+0 records out 00:12:36.423 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00912389 s, 115 MB/s 00:12:36.423 10:33:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:36.423 10:33:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:36.423 256+0 records in 00:12:36.423 256+0 records out 00:12:36.423 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0466906 s, 22.5 MB/s 00:12:36.423 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:36.423 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd 
if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:36.423 256+0 records in 00:12:36.423 256+0 records out 00:12:36.423 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0600993 s, 17.4 MB/s 00:12:36.423 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:36.423 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:12:36.423 256+0 records in 00:12:36.423 256+0 records out 00:12:36.423 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0490057 s, 21.4 MB/s 00:12:36.423 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:36.424 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:12:36.424 256+0 records in 00:12:36.424 256+0 records out 00:12:36.424 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0474758 s, 22.1 MB/s 00:12:36.424 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:36.424 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:12:36.682 256+0 records in 00:12:36.682 256+0 records out 00:12:36.682 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0467785 s, 22.4 MB/s 00:12:36.682 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:36.682 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:12:36.682 256+0 records in 00:12:36.682 256+0 records out 00:12:36.682 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0447541 s, 23.4 MB/s 00:12:36.682 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:12:36.682 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:36.682 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:36.682 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:36.682 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:36.682 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:36.682 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:36.682 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:36.682 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:12:36.682 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:36.682 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:12:36.682 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:36.682 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:12:36.682 10:33:11 
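[Editor's note] The data-verify pass above fills a 1 MiB scratch file from /dev/urandom, copies it onto every exported device with O_DIRECT writes, then cmp's each device against the file and deletes it. Condensed into a single loop (commands and scratch path taken from the trace; the original runs all writes before all compares):

  data=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
  dd if=/dev/urandom of="$data" bs=4096 count=256              # 1 MiB of random data
  for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
      dd if="$data" of="$nbd" bs=4096 count=256 oflag=direct   # write it to the device
      cmp -b -n 1M "$data" "$nbd"                              # read back and compare byte-for-byte
  done
  rm -f "$data"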
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:36.682 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:12:36.682 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:36.682 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:12:36.682 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:36.682 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:12:36.682 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:36.682 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:36.682 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:36.682 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:36.682 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:36.682 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:36.682 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:36.682 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:36.941 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:36.941 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:36.941 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:36.941 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:36.941 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:36.941 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:36.941 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:36.941 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:36.941 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:36.941 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:36.941 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:36.941 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:36.941 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:36.941 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:36.941 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:36.941 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:36.941 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:36.941 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:36.941 10:33:11 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:36.941 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:37.200 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:37.200 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:37.200 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:37.200 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:37.200 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:37.200 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:37.200 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:37.200 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:37.200 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:37.200 10:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:37.458 10:33:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:37.458 10:33:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:37.458 10:33:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:37.458 10:33:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:37.458 10:33:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:37.458 10:33:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:37.458 10:33:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:37.458 10:33:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:37.458 10:33:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:37.458 10:33:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:37.716 10:33:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:37.716 10:33:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:37.716 10:33:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:37.716 10:33:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:37.716 10:33:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:37.716 10:33:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:37.716 10:33:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:37.716 10:33:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:37.716 10:33:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:37.716 10:33:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:37.975 10:33:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:37.975 10:33:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:37.975 10:33:12 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:37.975 10:33:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:37.975 10:33:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:37.975 10:33:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:37.975 10:33:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:37.975 10:33:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:37.975 10:33:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:37.975 10:33:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:37.975 10:33:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:37.975 10:33:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:37.975 10:33:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:37.975 10:33:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:38.232 10:33:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:38.232 10:33:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:38.232 10:33:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:38.232 10:33:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:38.232 10:33:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:38.232 10:33:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:38.232 10:33:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:12:38.232 10:33:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:38.232 10:33:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:12:38.232 10:33:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:38.232 10:33:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:38.232 10:33:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:12:38.232 10:33:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:12:38.232 malloc_lvol_verify 00:12:38.232 10:33:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:12:38.490 491499a7-37da-4822-90dc-34e5bd438d58 00:12:38.490 10:33:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:12:38.749 772fdf57-2a58-446f-92ad-313811242c41 00:12:38.749 10:33:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:12:39.008 /dev/nbd0 00:12:39.008 10:33:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:12:39.008 10:33:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:12:39.008 10:33:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:12:39.008 10:33:13 
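[Editor's note] nbd_with_lvol_verify builds a small logical volume on a malloc bdev, exports it over NBD, and formats it to confirm the export behaves like a real block device. The whole flow, condensed from the RPCs traced above:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MB malloc bdev, 512-byte blocks
  $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on top of it
  $rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MB logical volume named "lvol"
  $rpc nbd_start_disk lvs/lvol /dev/nbd0                 # export the lvol at /dev/nbd0
  mkfs.ext4 /dev/nbd0                                    # format it as ext4
  $rpc nbd_stop_disk /dev/nbd0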
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:12:39.008 10:33:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:12:39.008 mke2fs 1.47.0 (5-Feb-2023) 00:12:39.008 Discarding device blocks: 0/4096 done 00:12:39.008 Creating filesystem with 4096 1k blocks and 1024 inodes 00:12:39.008 00:12:39.008 Allocating group tables: 0/1 done 00:12:39.008 Writing inode tables: 0/1 done 00:12:39.008 Creating journal (1024 blocks): done 00:12:39.008 Writing superblocks and filesystem accounting information: 0/1 done 00:12:39.008 00:12:39.008 10:33:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:39.008 10:33:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:39.008 10:33:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:39.008 10:33:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:39.008 10:33:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:39.008 10:33:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:39.008 10:33:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:39.268 10:33:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:39.268 10:33:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:39.268 10:33:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:39.268 10:33:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:39.268 10:33:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:39.268 10:33:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:39.268 10:33:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:39.268 10:33:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:39.268 10:33:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 82602 00:12:39.269 10:33:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 82602 ']' 00:12:39.269 10:33:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 82602 00:12:39.269 10:33:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:12:39.269 10:33:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:39.269 10:33:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82602 00:12:39.269 10:33:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:39.269 10:33:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:39.269 killing process with pid 82602 00:12:39.269 10:33:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82602' 00:12:39.269 10:33:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 82602 00:12:39.269 10:33:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 82602 00:12:39.269 10:33:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:12:39.269 00:12:39.269 real 0m8.636s 00:12:39.269 user 0m12.642s 00:12:39.269 sys 0m3.035s 00:12:39.269 10:33:13 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:12:39.269 10:33:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:12:39.269 ************************************ 00:12:39.269 END TEST bdev_nbd 00:12:39.269 ************************************ 00:12:39.269 10:33:14 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:12:39.269 10:33:14 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:12:39.269 10:33:14 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:12:39.269 10:33:14 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:12:39.269 10:33:14 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:39.269 10:33:14 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:39.269 10:33:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:39.269 ************************************ 00:12:39.269 START TEST bdev_fio 00:12:39.269 ************************************ 00:12:39.269 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:12:39.269 10:33:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:12:39.269 10:33:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:12:39.269 10:33:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:12:39.269 10:33:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:12:39.269 10:33:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:12:39.269 10:33:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:12:39.269 10:33:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:12:39.269 10:33:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:12:39.269 10:33:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:39.269 10:33:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:12:39.269 10:33:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:12:39.269 10:33:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:12:39.269 10:33:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:12:39.269 10:33:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:39.269 10:33:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:12:39.269 10:33:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:12:39.269 10:33:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:39.269 10:33:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:12:39.531 10:33:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:12:39.531 10:33:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:12:39.531 10:33:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:12:39.531 10:33:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:12:39.531 
10:33:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:12:39.531 10:33:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:12:39.531 10:33:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:39.531 10:33:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:12:39.531 10:33:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:12:39.531 10:33:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:39.531 10:33:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:12:39.531 10:33:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:12:39.531 10:33:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:39.531 10:33:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:12:39.531 10:33:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:12:39.531 10:33:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:39.531 10:33:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:12:39.531 10:33:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:12:39.531 10:33:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:39.531 10:33:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:12:39.531 10:33:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:12:39.531 10:33:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:39.531 10:33:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:12:39.531 10:33:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:12:39.531 10:33:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:12:39.531 10:33:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:39.531 10:33:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:12:39.531 10:33:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:39.531 10:33:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:12:39.531 ************************************ 00:12:39.531 START TEST bdev_fio_rw_verify 00:12:39.531 ************************************ 00:12:39.531 10:33:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:39.531 10:33:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:39.531 10:33:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:12:39.531 10:33:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:39.531 10:33:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:12:39.531 10:33:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:39.531 10:33:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:12:39.531 10:33:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:12:39.531 10:33:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:12:39.531 10:33:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:39.531 10:33:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:12:39.531 10:33:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:12:39.531 10:33:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:39.531 10:33:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:39.531 10:33:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:12:39.531 10:33:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:39.531 10:33:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:39.531 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:39.531 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:39.531 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:39.531 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:39.531 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:39.531 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:39.531 fio-3.35 00:12:39.531 Starting 6 threads 00:12:51.761 00:12:51.761 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=82985: Sat Sep 28 10:33:24 2024 00:12:51.761 read: IOPS=17.6k, 
BW=68.9MiB/s (72.2MB/s)(689MiB/10004msec) 00:12:51.761 slat (usec): min=2, max=2675, avg= 5.65, stdev=19.83 00:12:51.761 clat (usec): min=73, max=8084, avg=1036.34, stdev=815.27 00:12:51.761 lat (usec): min=75, max=8090, avg=1041.99, stdev=816.12 00:12:51.761 clat percentiles (usec): 00:12:51.761 | 50.000th=[ 807], 99.000th=[ 3621], 99.900th=[ 5014], 99.990th=[ 6259], 00:12:51.761 | 99.999th=[ 8094] 00:12:51.761 write: IOPS=17.9k, BW=69.9MiB/s (73.3MB/s)(699MiB/10004msec); 0 zone resets 00:12:51.761 slat (usec): min=5, max=4842, avg=38.25, stdev=141.69 00:12:51.761 clat (usec): min=72, max=11475, avg=1359.94, stdev=1034.95 00:12:51.761 lat (usec): min=86, max=11539, avg=1398.19, stdev=1049.35 00:12:51.762 clat percentiles (usec): 00:12:51.762 | 50.000th=[ 1106], 99.000th=[ 4948], 99.900th=[ 7373], 99.990th=[ 9110], 00:12:51.762 | 99.999th=[11076] 00:12:51.762 bw ( KiB/s): min=44591, max=126429, per=100.00%, avg=72909.79, stdev=4201.58, samples=114 00:12:51.762 iops : min=11147, max=31607, avg=18226.79, stdev=1050.40, samples=114 00:12:51.762 lat (usec) : 100=0.06%, 250=7.99%, 500=18.33%, 750=14.54%, 1000=10.94% 00:12:51.762 lat (msec) : 2=31.38%, 4=15.20%, 10=1.54%, 20=0.01% 00:12:51.762 cpu : usr=42.46%, sys=32.42%, ctx=5599, majf=0, minf=16901 00:12:51.762 IO depths : 1=11.3%, 2=23.6%, 4=51.2%, 8=13.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:51.762 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:51.762 complete : 0=0.0%, 4=89.3%, 8=10.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:51.762 issued rwts: total=176457,178942,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:51.762 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:51.762 00:12:51.762 Run status group 0 (all jobs): 00:12:51.762 READ: bw=68.9MiB/s (72.2MB/s), 68.9MiB/s-68.9MiB/s (72.2MB/s-72.2MB/s), io=689MiB (723MB), run=10004-10004msec 00:12:51.762 WRITE: bw=69.9MiB/s (73.3MB/s), 69.9MiB/s-69.9MiB/s (73.3MB/s-73.3MB/s), io=699MiB (733MB), run=10004-10004msec 00:12:51.762 ----------------------------------------------------- 00:12:51.762 Suppressions used: 00:12:51.762 count bytes template 00:12:51.762 6 48 /usr/src/fio/parse.c 00:12:51.762 2362 226752 /usr/src/fio/iolog.c 00:12:51.762 1 8 libtcmalloc_minimal.so 00:12:51.762 1 904 libcrypto.so 00:12:51.762 ----------------------------------------------------- 00:12:51.762 00:12:51.762 00:12:51.762 real 0m11.050s 00:12:51.762 user 0m26.161s 00:12:51.762 sys 0m19.718s 00:12:51.762 10:33:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:51.762 ************************************ 00:12:51.762 END TEST bdev_fio_rw_verify 00:12:51.762 ************************************ 00:12:51.762 10:33:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:12:51.762 10:33:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:12:51.762 10:33:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:51.762 10:33:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:12:51.762 10:33:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:51.762 10:33:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:12:51.762 10:33:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:12:51.762 
10:33:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:12:51.762 10:33:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:12:51.762 10:33:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:51.762 10:33:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:12:51.762 10:33:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:12:51.762 10:33:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:51.762 10:33:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:12:51.762 10:33:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:12:51.762 10:33:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:12:51.762 10:33:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:12:51.762 10:33:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:12:51.762 10:33:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "430e6e22-3dc0-44d1-a103-11f8ae53cc03"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "430e6e22-3dc0-44d1-a103-11f8ae53cc03",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "005cca2f-9ee9-404b-b385-a08b12015f2c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "005cca2f-9ee9-404b-b385-a08b12015f2c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "52adf586-21d2-458e-9131-c06ea5ca21cc"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "52adf586-21d2-458e-9131-c06ea5ca21cc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' 
' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "6ae3ccb9-f47a-44bd-b523-311aaa710091"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6ae3ccb9-f47a-44bd-b523-311aaa710091",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "c96f0e9a-1012-4b3d-aba2-008835e3cbc5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c96f0e9a-1012-4b3d-aba2-008835e3cbc5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "37ff3147-e698-43e3-aac1-e5dd355fb5de"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "37ff3147-e698-43e3-aac1-e5dd355fb5de",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:12:51.762 10:33:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:12:51.762 10:33:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:51.762 /home/vagrant/spdk_repo/spdk 00:12:51.762 10:33:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:12:51.762 
10:33:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:12:51.762 10:33:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:12:51.762 00:12:51.762 real 0m11.220s 00:12:51.762 user 0m26.245s 00:12:51.762 sys 0m19.780s 00:12:51.762 10:33:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:51.762 ************************************ 00:12:51.762 END TEST bdev_fio 00:12:51.762 ************************************ 00:12:51.762 10:33:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:12:51.762 10:33:25 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:51.762 10:33:25 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:51.762 10:33:25 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:12:51.762 10:33:25 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:51.762 10:33:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:51.762 ************************************ 00:12:51.762 START TEST bdev_verify 00:12:51.762 ************************************ 00:12:51.762 10:33:25 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:51.762 [2024-09-28 10:33:25.392072] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:12:51.762 [2024-09-28 10:33:25.392212] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83149 ] 00:12:51.762 [2024-09-28 10:33:25.525470] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:51.762 [2024-09-28 10:33:25.544516] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:51.762 [2024-09-28 10:33:25.594936] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.762 [2024-09-28 10:33:25.595023] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:51.762 Running I/O for 5 seconds... 
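For readability, the bdevperf verify invocation traced above can be written out as the sketch below. It is the same command with the flags spelled out one per comment line, not an additional run; the annotations are a best-effort reading of bdevperf's options, and the paths are the workspace paths already shown in the trace.

# Annotated sketch of the verify run launched above (same flags, reformatted):
#   --json     JSON bdev configuration (the xnvme bdevs set up earlier)
#   -q 128     queue depth per bdev
#   -o 4096    I/O size in bytes
#   -w verify  write data, then read it back and compare
#   -t 5       run time in seconds
#   -C -m 0x3  kept as in the trace (core mask 0x3 = the two reactors started above)
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3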
00:12:56.185 25216.00 IOPS, 98.50 MiB/s 24256.00 IOPS, 94.75 MiB/s 24298.00 IOPS, 94.91 MiB/s 23832.00 IOPS, 93.09 MiB/s 23558.40 IOPS, 92.02 MiB/s 00:12:56.185 Latency(us) 00:12:56.185 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:56.185 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:56.185 Verification LBA range: start 0x0 length 0xa0000 00:12:56.185 nvme0n1 : 5.04 1877.97 7.34 0.00 0.00 68029.45 10687.41 75013.51 00:12:56.185 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:56.185 Verification LBA range: start 0xa0000 length 0xa0000 00:12:56.185 nvme0n1 : 5.06 1822.12 7.12 0.00 0.00 70128.55 8267.62 62914.56 00:12:56.185 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:56.185 Verification LBA range: start 0x0 length 0xbd0bd 00:12:56.185 nvme1n1 : 5.05 2357.61 9.21 0.00 0.00 53947.25 6377.16 65737.65 00:12:56.185 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:56.185 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:12:56.185 nvme1n1 : 5.04 2292.00 8.95 0.00 0.00 55639.76 5293.29 56865.08 00:12:56.185 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:56.185 Verification LBA range: start 0x0 length 0x80000 00:12:56.185 nvme2n1 : 5.07 1970.37 7.70 0.00 0.00 64367.79 10889.06 67754.14 00:12:56.185 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:56.185 Verification LBA range: start 0x80000 length 0x80000 00:12:56.185 nvme2n1 : 5.04 1853.30 7.24 0.00 0.00 68744.16 8217.21 60494.77 00:12:56.185 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:56.185 Verification LBA range: start 0x0 length 0x80000 00:12:56.185 nvme2n2 : 5.07 1892.02 7.39 0.00 0.00 66895.18 5217.67 71383.83 00:12:56.185 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:56.185 Verification LBA range: start 0x80000 length 0x80000 00:12:56.185 nvme2n2 : 5.04 1827.23 7.14 0.00 0.00 69609.67 6251.13 58478.28 00:12:56.185 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:56.185 Verification LBA range: start 0x0 length 0x80000 00:12:56.185 nvme2n3 : 5.08 1891.47 7.39 0.00 0.00 66815.34 5948.65 73803.62 00:12:56.185 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:56.185 Verification LBA range: start 0x80000 length 0x80000 00:12:56.185 nvme2n3 : 5.05 1799.75 7.03 0.00 0.00 70621.28 9175.04 63721.16 00:12:56.185 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:56.185 Verification LBA range: start 0x0 length 0x20000 00:12:56.185 nvme3n1 : 5.08 1890.92 7.39 0.00 0.00 66718.76 4688.34 75820.11 00:12:56.185 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:56.185 Verification LBA range: start 0x20000 length 0x20000 00:12:56.185 nvme3n1 : 5.06 1822.66 7.12 0.00 0.00 69680.72 3188.58 63721.16 00:12:56.185 =================================================================================================================== 00:12:56.185 Total : 23297.41 91.01 0.00 0.00 65456.39 3188.58 75820.11 00:12:56.447 00:12:56.447 real 0m5.852s 00:12:56.447 user 0m9.307s 00:12:56.447 sys 0m1.448s 00:12:56.447 10:33:31 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:56.447 10:33:31 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:12:56.447 ************************************ 00:12:56.447 END 
TEST bdev_verify 00:12:56.447 ************************************ 00:12:56.708 10:33:31 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:56.708 10:33:31 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:12:56.708 10:33:31 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:56.708 10:33:31 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:56.708 ************************************ 00:12:56.708 START TEST bdev_verify_big_io 00:12:56.708 ************************************ 00:12:56.708 10:33:31 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:56.708 [2024-09-28 10:33:31.311734] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:12:56.708 [2024-09-28 10:33:31.311868] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83241 ] 00:12:56.708 [2024-09-28 10:33:31.448042] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:56.708 [2024-09-28 10:33:31.468626] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:56.969 [2024-09-28 10:33:31.521504] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:56.969 [2024-09-28 10:33:31.521579] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.231 Running I/O for 5 seconds... 
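The MiB/s figures bdevperf prints in the progress samples and result table below follow directly from IOPS times the I/O size (65536 bytes for this big-I/O run) divided by 2^20. A quick check of the first progress sample, using bc:

# 1584 IOPS at 64 KiB per I/O:
echo 'scale=2; 1584 * 65536 / 1048576' | bc    # prints 99.00, matching the 99.00 MiB/s below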
00:13:03.424 1584.00 IOPS, 99.00 MiB/s 2671.50 IOPS, 166.97 MiB/s 2895.67 IOPS, 180.98 MiB/s 00:13:03.424 Latency(us) 00:13:03.424 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:03.424 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:03.424 Verification LBA range: start 0x0 length 0xa000 00:13:03.425 nvme0n1 : 6.04 111.33 6.96 0.00 0.00 1113957.76 135508.28 1548666.09 00:13:03.425 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:03.425 Verification LBA range: start 0xa000 length 0xa000 00:13:03.425 nvme0n1 : 6.02 113.57 7.10 0.00 0.00 1082298.00 115343.36 1213121.77 00:13:03.425 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:03.425 Verification LBA range: start 0x0 length 0xbd0b 00:13:03.425 nvme1n1 : 6.01 143.82 8.99 0.00 0.00 829492.59 10132.87 1477685.56 00:13:03.425 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:03.425 Verification LBA range: start 0xbd0b length 0xbd0b 00:13:03.425 nvme1n1 : 6.03 106.18 6.64 0.00 0.00 1144460.13 16031.11 1690627.15 00:13:03.425 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:03.425 Verification LBA range: start 0x0 length 0x8000 00:13:03.425 nvme2n1 : 6.05 145.49 9.09 0.00 0.00 803084.10 112116.97 916294.10 00:13:03.425 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:03.425 Verification LBA range: start 0x8000 length 0x8000 00:13:03.425 nvme2n1 : 6.06 68.63 4.29 0.00 0.00 1726092.21 103244.41 2297188.04 00:13:03.425 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:03.425 Verification LBA range: start 0x0 length 0x8000 00:13:03.425 nvme2n2 : 6.03 132.64 8.29 0.00 0.00 844819.53 110503.78 1742249.35 00:13:03.425 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:03.425 Verification LBA range: start 0x8000 length 0x8000 00:13:03.425 nvme2n2 : 6.04 112.76 7.05 0.00 0.00 1023703.56 120182.94 1651910.50 00:13:03.425 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:03.425 Verification LBA range: start 0x0 length 0x8000 00:13:03.425 nvme2n3 : 6.03 148.50 9.28 0.00 0.00 736579.40 89128.96 725937.23 00:13:03.425 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:03.425 Verification LBA range: start 0x8000 length 0x8000 00:13:03.425 nvme2n3 : 6.04 127.10 7.94 0.00 0.00 870101.50 12048.54 884030.23 00:13:03.425 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:03.425 Verification LBA range: start 0x0 length 0x2000 00:13:03.425 nvme3n1 : 6.04 203.90 12.74 0.00 0.00 527121.99 8015.56 1464780.01 00:13:03.425 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:03.425 Verification LBA range: start 0x2000 length 0x2000 00:13:03.425 nvme3n1 : 6.05 74.02 4.63 0.00 0.00 1438999.74 17140.18 3845854.13 00:13:03.425 =================================================================================================================== 00:13:03.425 Total : 1487.93 93.00 0.00 0.00 930628.19 8015.56 3845854.13 00:13:03.686 00:13:03.686 real 0m6.973s 00:13:03.686 user 0m12.722s 00:13:03.686 sys 0m0.461s 00:13:03.686 10:33:38 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:03.686 ************************************ 00:13:03.686 10:33:38 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.686 END TEST 
bdev_verify_big_io 00:13:03.686 ************************************ 00:13:03.686 10:33:38 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:03.686 10:33:38 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:03.686 10:33:38 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:03.686 10:33:38 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:03.686 ************************************ 00:13:03.686 START TEST bdev_write_zeroes 00:13:03.686 ************************************ 00:13:03.686 10:33:38 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:03.686 [2024-09-28 10:33:38.361994] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:13:03.686 [2024-09-28 10:33:38.362137] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83341 ] 00:13:03.948 [2024-09-28 10:33:38.494577] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:03.948 [2024-09-28 10:33:38.513873] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.948 [2024-09-28 10:33:38.586052] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.210 Running I/O for 1 seconds... 
00:13:05.153 85408.00 IOPS, 333.62 MiB/s 00:13:05.153 Latency(us) 00:13:05.153 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:05.153 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:05.153 nvme0n1 : 1.02 13966.52 54.56 0.00 0.00 9154.03 6225.92 23290.49 00:13:05.153 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:05.153 nvme1n1 : 1.02 15532.91 60.68 0.00 0.00 8222.41 4814.38 21778.12 00:13:05.153 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:05.153 nvme2n1 : 1.02 13948.58 54.49 0.00 0.00 9111.32 6200.71 20769.87 00:13:05.153 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:05.153 nvme2n2 : 1.02 13932.83 54.43 0.00 0.00 9095.66 4990.82 20971.52 00:13:05.153 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:05.153 nvme2n3 : 1.02 13917.12 54.36 0.00 0.00 9096.36 4915.20 20971.52 00:13:05.153 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:05.153 nvme3n1 : 1.02 13901.17 54.30 0.00 0.00 9100.85 5948.65 19660.80 00:13:05.153 =================================================================================================================== 00:13:05.153 Total : 85199.13 332.81 0.00 0.00 8950.05 4814.38 23290.49 00:13:05.415 00:13:05.415 real 0m1.827s 00:13:05.415 user 0m1.099s 00:13:05.415 sys 0m0.545s 00:13:05.415 10:33:40 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:05.415 ************************************ 00:13:05.415 END TEST bdev_write_zeroes 00:13:05.415 ************************************ 00:13:05.415 10:33:40 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:13:05.415 10:33:40 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:05.415 10:33:40 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:05.415 10:33:40 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:05.415 10:33:40 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:05.415 ************************************ 00:13:05.415 START TEST bdev_json_nonenclosed 00:13:05.415 ************************************ 00:13:05.415 10:33:40 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:05.678 [2024-09-28 10:33:40.250385] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:13:05.678 [2024-09-28 10:33:40.250531] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83383 ] 00:13:05.678 [2024-09-28 10:33:40.381424] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:13:05.678 [2024-09-28 10:33:40.394278] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:05.678 [2024-09-28 10:33:40.444981] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.678 [2024-09-28 10:33:40.445091] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:05.678 [2024-09-28 10:33:40.445112] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:05.678 [2024-09-28 10:33:40.445129] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:05.939 00:13:05.939 real 0m0.366s 00:13:05.939 user 0m0.157s 00:13:05.939 sys 0m0.104s 00:13:05.939 10:33:40 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:05.939 ************************************ 00:13:05.939 END TEST bdev_json_nonenclosed 00:13:05.939 ************************************ 00:13:05.939 10:33:40 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:13:05.939 10:33:40 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:05.939 10:33:40 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:05.939 10:33:40 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:05.939 10:33:40 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:05.939 ************************************ 00:13:05.939 START TEST bdev_json_nonarray 00:13:05.939 ************************************ 00:13:05.939 10:33:40 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:05.939 [2024-09-28 10:33:40.688255] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:13:05.939 [2024-09-28 10:33:40.688402] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83403 ] 00:13:06.201 [2024-09-28 10:33:40.820543] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:06.201 [2024-09-28 10:33:40.841127] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:06.201 [2024-09-28 10:33:40.890918] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:06.201 [2024-09-28 10:33:40.891065] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
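The two negative tests above (bdev_json_nonenclosed and bdev_json_nonarray) feed bdevperf configuration files that trip the checks quoted in the errors: the file must be a single JSON object and its "subsystems" member must be an array. A minimal well-formed skeleton, written to a hypothetical /tmp path purely for illustration, would look like:

cat > /tmp/minimal_bdev.json <<'EOF'
{
  "subsystems": [
    { "subsystem": "bdev", "config": [] }
  ]
}
EOF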
00:13:06.201 [2024-09-28 10:33:40.891085] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:06.201 [2024-09-28 10:33:40.891104] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:06.463 00:13:06.463 real 0m0.375s 00:13:06.463 user 0m0.152s 00:13:06.463 sys 0m0.118s 00:13:06.463 10:33:40 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:06.463 10:33:40 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:13:06.463 ************************************ 00:13:06.463 END TEST bdev_json_nonarray 00:13:06.463 ************************************ 00:13:06.463 10:33:41 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:13:06.463 10:33:41 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:13:06.463 10:33:41 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:13:06.463 10:33:41 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:13:06.463 10:33:41 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:13:06.463 10:33:41 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:06.463 10:33:41 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:06.463 10:33:41 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:13:06.463 10:33:41 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:13:06.463 10:33:41 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:13:06.463 10:33:41 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:13:06.463 10:33:41 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:07.037 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:10.342 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:10.342 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:10.342 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:10.915 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:11.177 00:13:11.177 real 0m49.653s 00:13:11.177 user 1m14.037s 00:13:11.177 sys 0m35.437s 00:13:11.177 10:33:45 blockdev_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:11.177 10:33:45 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:11.177 ************************************ 00:13:11.177 END TEST blockdev_xnvme 00:13:11.177 ************************************ 00:13:11.177 10:33:45 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:11.177 10:33:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:11.177 10:33:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:11.177 10:33:45 -- common/autotest_common.sh@10 -- # set +x 00:13:11.177 ************************************ 00:13:11.177 START TEST ublk 00:13:11.177 ************************************ 00:13:11.177 10:33:45 ublk -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:11.177 * Looking for test storage... 
00:13:11.177 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:11.177 10:33:45 ublk -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:11.177 10:33:45 ublk -- common/autotest_common.sh@1681 -- # lcov --version 00:13:11.177 10:33:45 ublk -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:11.177 10:33:45 ublk -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:11.177 10:33:45 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:11.177 10:33:45 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:11.177 10:33:45 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:11.177 10:33:45 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:13:11.177 10:33:45 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:13:11.177 10:33:45 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:13:11.177 10:33:45 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:13:11.177 10:33:45 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:13:11.177 10:33:45 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:13:11.177 10:33:45 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:13:11.177 10:33:45 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:11.177 10:33:45 ublk -- scripts/common.sh@344 -- # case "$op" in 00:13:11.177 10:33:45 ublk -- scripts/common.sh@345 -- # : 1 00:13:11.177 10:33:45 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:11.177 10:33:45 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:11.177 10:33:45 ublk -- scripts/common.sh@365 -- # decimal 1 00:13:11.177 10:33:45 ublk -- scripts/common.sh@353 -- # local d=1 00:13:11.177 10:33:45 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:11.177 10:33:45 ublk -- scripts/common.sh@355 -- # echo 1 00:13:11.177 10:33:45 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:13:11.177 10:33:45 ublk -- scripts/common.sh@366 -- # decimal 2 00:13:11.177 10:33:45 ublk -- scripts/common.sh@353 -- # local d=2 00:13:11.177 10:33:45 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:11.177 10:33:45 ublk -- scripts/common.sh@355 -- # echo 2 00:13:11.177 10:33:45 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:13:11.177 10:33:45 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:11.177 10:33:45 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:11.177 10:33:45 ublk -- scripts/common.sh@368 -- # return 0 00:13:11.177 10:33:45 ublk -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:11.177 10:33:45 ublk -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:11.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.177 --rc genhtml_branch_coverage=1 00:13:11.177 --rc genhtml_function_coverage=1 00:13:11.177 --rc genhtml_legend=1 00:13:11.177 --rc geninfo_all_blocks=1 00:13:11.177 --rc geninfo_unexecuted_blocks=1 00:13:11.177 00:13:11.177 ' 00:13:11.177 10:33:45 ublk -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:11.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.177 --rc genhtml_branch_coverage=1 00:13:11.177 --rc genhtml_function_coverage=1 00:13:11.177 --rc genhtml_legend=1 00:13:11.177 --rc geninfo_all_blocks=1 00:13:11.177 --rc geninfo_unexecuted_blocks=1 00:13:11.177 00:13:11.177 ' 00:13:11.177 10:33:45 ublk -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:11.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.177 --rc genhtml_branch_coverage=1 00:13:11.177 --rc 
genhtml_function_coverage=1 00:13:11.177 --rc genhtml_legend=1 00:13:11.177 --rc geninfo_all_blocks=1 00:13:11.177 --rc geninfo_unexecuted_blocks=1 00:13:11.177 00:13:11.177 ' 00:13:11.177 10:33:45 ublk -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:11.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.177 --rc genhtml_branch_coverage=1 00:13:11.177 --rc genhtml_function_coverage=1 00:13:11.177 --rc genhtml_legend=1 00:13:11.177 --rc geninfo_all_blocks=1 00:13:11.177 --rc geninfo_unexecuted_blocks=1 00:13:11.177 00:13:11.177 ' 00:13:11.177 10:33:45 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:11.177 10:33:45 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:11.177 10:33:45 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:11.177 10:33:45 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:11.177 10:33:45 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:11.177 10:33:45 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:11.177 10:33:45 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:11.177 10:33:45 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:11.177 10:33:45 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:11.177 10:33:45 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:13:11.177 10:33:45 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:13:11.177 10:33:45 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:13:11.177 10:33:45 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:13:11.177 10:33:45 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:13:11.177 10:33:45 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:13:11.177 10:33:45 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:13:11.177 10:33:45 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:13:11.177 10:33:45 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:13:11.177 10:33:45 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:13:11.177 10:33:45 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:13:11.177 10:33:45 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:11.177 10:33:45 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:11.177 10:33:45 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:11.177 ************************************ 00:13:11.177 START TEST test_save_ublk_config 00:13:11.177 ************************************ 00:13:11.177 10:33:45 ublk.test_save_ublk_config -- common/autotest_common.sh@1125 -- # test_save_config 00:13:11.177 10:33:45 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:13:11.177 10:33:45 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=83689 00:13:11.177 10:33:45 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:13:11.177 10:33:45 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:13:11.177 10:33:45 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 83689 00:13:11.177 10:33:45 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 83689 ']' 00:13:11.177 10:33:45 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:11.177 10:33:45 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:11.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
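Before the test above can issue RPCs, spdk_tgt's UNIX-domain RPC socket has to exist; the traced waitforlisten helper takes care of that with a retry loop. A bare-bones stand-in (illustrative only, not the helper's actual logic) is a poll on the socket path:

# Wait for the spdk_tgt RPC socket before sending any rpc.py commands:
while [ ! -S /var/tmp/spdk.sock ]; do
    sleep 0.1
done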
00:13:11.177 10:33:45 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:11.177 10:33:45 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:11.177 10:33:45 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:11.438 [2024-09-28 10:33:46.011749] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:13:11.438 [2024-09-28 10:33:46.011864] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83689 ] 00:13:11.438 [2024-09-28 10:33:46.141148] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:11.438 [2024-09-28 10:33:46.163825] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.699 [2024-09-28 10:33:46.226341] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.270 10:33:46 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:12.270 10:33:46 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:13:12.270 10:33:46 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:13:12.270 10:33:46 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:13:12.270 10:33:46 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.270 10:33:46 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:12.270 [2024-09-28 10:33:46.857990] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:12.270 [2024-09-28 10:33:46.858360] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:12.270 malloc0 00:13:12.270 [2024-09-28 10:33:46.890134] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:12.270 [2024-09-28 10:33:46.890229] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:12.271 [2024-09-28 10:33:46.890239] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:12.271 [2024-09-28 10:33:46.890252] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:12.271 [2024-09-28 10:33:46.899096] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:12.271 [2024-09-28 10:33:46.899126] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:12.271 [2024-09-28 10:33:46.901672] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:12.271 [2024-09-28 10:33:46.901792] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:12.271 [2024-09-28 10:33:46.910398] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:12.271 0 00:13:12.271 10:33:46 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.271 10:33:46 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:13:12.271 10:33:46 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.271 10:33:46 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 
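Stripped of the xtrace noise, the setup that produces the configuration dump below is a short RPC sequence against the freshly started target: create a malloc bdev, create the ublk target, expose the bdev as /dev/ublkb0, then save the configuration. The sketch uses the values visible in the trace (8192 blocks of 4096 bytes, i.e. 32 MiB; one queue of depth 128); the rpc.py option spellings and the ublk_config.json filename are illustrative and may differ from the script's exact wording.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_malloc_create -b malloc0 32 4096    # 8192 blocks x 4096 B, as in the saved config
$rpc ublk_create_target                       # the saved config records cpumask "1"
$rpc ublk_start_disk malloc0 0 -q 1 -d 128    # ublk id 0 -> /dev/ublkb0
$rpc save_config > ublk_config.json           # JSON shown below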
00:13:12.533 10:33:47 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.533 10:33:47 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:13:12.533 "subsystems": [ 00:13:12.533 { 00:13:12.533 "subsystem": "fsdev", 00:13:12.533 "config": [ 00:13:12.533 { 00:13:12.533 "method": "fsdev_set_opts", 00:13:12.533 "params": { 00:13:12.533 "fsdev_io_pool_size": 65535, 00:13:12.533 "fsdev_io_cache_size": 256 00:13:12.533 } 00:13:12.533 } 00:13:12.533 ] 00:13:12.533 }, 00:13:12.533 { 00:13:12.533 "subsystem": "keyring", 00:13:12.533 "config": [] 00:13:12.533 }, 00:13:12.533 { 00:13:12.533 "subsystem": "iobuf", 00:13:12.533 "config": [ 00:13:12.533 { 00:13:12.533 "method": "iobuf_set_options", 00:13:12.533 "params": { 00:13:12.533 "small_pool_count": 8192, 00:13:12.533 "large_pool_count": 1024, 00:13:12.533 "small_bufsize": 8192, 00:13:12.533 "large_bufsize": 135168 00:13:12.533 } 00:13:12.533 } 00:13:12.533 ] 00:13:12.533 }, 00:13:12.533 { 00:13:12.533 "subsystem": "sock", 00:13:12.533 "config": [ 00:13:12.533 { 00:13:12.533 "method": "sock_set_default_impl", 00:13:12.533 "params": { 00:13:12.533 "impl_name": "posix" 00:13:12.533 } 00:13:12.533 }, 00:13:12.533 { 00:13:12.533 "method": "sock_impl_set_options", 00:13:12.533 "params": { 00:13:12.533 "impl_name": "ssl", 00:13:12.533 "recv_buf_size": 4096, 00:13:12.533 "send_buf_size": 4096, 00:13:12.533 "enable_recv_pipe": true, 00:13:12.533 "enable_quickack": false, 00:13:12.533 "enable_placement_id": 0, 00:13:12.533 "enable_zerocopy_send_server": true, 00:13:12.533 "enable_zerocopy_send_client": false, 00:13:12.533 "zerocopy_threshold": 0, 00:13:12.533 "tls_version": 0, 00:13:12.533 "enable_ktls": false 00:13:12.533 } 00:13:12.533 }, 00:13:12.533 { 00:13:12.533 "method": "sock_impl_set_options", 00:13:12.533 "params": { 00:13:12.533 "impl_name": "posix", 00:13:12.533 "recv_buf_size": 2097152, 00:13:12.533 "send_buf_size": 2097152, 00:13:12.533 "enable_recv_pipe": true, 00:13:12.533 "enable_quickack": false, 00:13:12.533 "enable_placement_id": 0, 00:13:12.533 "enable_zerocopy_send_server": true, 00:13:12.533 "enable_zerocopy_send_client": false, 00:13:12.533 "zerocopy_threshold": 0, 00:13:12.533 "tls_version": 0, 00:13:12.533 "enable_ktls": false 00:13:12.533 } 00:13:12.533 } 00:13:12.533 ] 00:13:12.533 }, 00:13:12.533 { 00:13:12.533 "subsystem": "vmd", 00:13:12.533 "config": [] 00:13:12.533 }, 00:13:12.533 { 00:13:12.533 "subsystem": "accel", 00:13:12.533 "config": [ 00:13:12.533 { 00:13:12.533 "method": "accel_set_options", 00:13:12.533 "params": { 00:13:12.533 "small_cache_size": 128, 00:13:12.533 "large_cache_size": 16, 00:13:12.533 "task_count": 2048, 00:13:12.533 "sequence_count": 2048, 00:13:12.533 "buf_count": 2048 00:13:12.533 } 00:13:12.533 } 00:13:12.533 ] 00:13:12.533 }, 00:13:12.533 { 00:13:12.533 "subsystem": "bdev", 00:13:12.533 "config": [ 00:13:12.533 { 00:13:12.533 "method": "bdev_set_options", 00:13:12.533 "params": { 00:13:12.533 "bdev_io_pool_size": 65535, 00:13:12.533 "bdev_io_cache_size": 256, 00:13:12.533 "bdev_auto_examine": true, 00:13:12.533 "iobuf_small_cache_size": 128, 00:13:12.533 "iobuf_large_cache_size": 16 00:13:12.533 } 00:13:12.533 }, 00:13:12.533 { 00:13:12.533 "method": "bdev_raid_set_options", 00:13:12.533 "params": { 00:13:12.533 "process_window_size_kb": 1024, 00:13:12.533 "process_max_bandwidth_mb_sec": 0 00:13:12.533 } 00:13:12.533 }, 00:13:12.533 { 00:13:12.533 "method": "bdev_iscsi_set_options", 00:13:12.533 "params": { 00:13:12.533 "timeout_sec": 30 
00:13:12.533 } 00:13:12.533 }, 00:13:12.533 { 00:13:12.533 "method": "bdev_nvme_set_options", 00:13:12.533 "params": { 00:13:12.533 "action_on_timeout": "none", 00:13:12.533 "timeout_us": 0, 00:13:12.533 "timeout_admin_us": 0, 00:13:12.533 "keep_alive_timeout_ms": 10000, 00:13:12.533 "arbitration_burst": 0, 00:13:12.533 "low_priority_weight": 0, 00:13:12.533 "medium_priority_weight": 0, 00:13:12.533 "high_priority_weight": 0, 00:13:12.533 "nvme_adminq_poll_period_us": 10000, 00:13:12.533 "nvme_ioq_poll_period_us": 0, 00:13:12.533 "io_queue_requests": 0, 00:13:12.533 "delay_cmd_submit": true, 00:13:12.533 "transport_retry_count": 4, 00:13:12.533 "bdev_retry_count": 3, 00:13:12.533 "transport_ack_timeout": 0, 00:13:12.533 "ctrlr_loss_timeout_sec": 0, 00:13:12.533 "reconnect_delay_sec": 0, 00:13:12.533 "fast_io_fail_timeout_sec": 0, 00:13:12.533 "disable_auto_failback": false, 00:13:12.533 "generate_uuids": false, 00:13:12.533 "transport_tos": 0, 00:13:12.533 "nvme_error_stat": false, 00:13:12.533 "rdma_srq_size": 0, 00:13:12.533 "io_path_stat": false, 00:13:12.533 "allow_accel_sequence": false, 00:13:12.533 "rdma_max_cq_size": 0, 00:13:12.533 "rdma_cm_event_timeout_ms": 0, 00:13:12.533 "dhchap_digests": [ 00:13:12.533 "sha256", 00:13:12.533 "sha384", 00:13:12.533 "sha512" 00:13:12.533 ], 00:13:12.533 "dhchap_dhgroups": [ 00:13:12.533 "null", 00:13:12.533 "ffdhe2048", 00:13:12.533 "ffdhe3072", 00:13:12.533 "ffdhe4096", 00:13:12.533 "ffdhe6144", 00:13:12.533 "ffdhe8192" 00:13:12.533 ] 00:13:12.533 } 00:13:12.533 }, 00:13:12.533 { 00:13:12.533 "method": "bdev_nvme_set_hotplug", 00:13:12.533 "params": { 00:13:12.533 "period_us": 100000, 00:13:12.533 "enable": false 00:13:12.533 } 00:13:12.533 }, 00:13:12.533 { 00:13:12.533 "method": "bdev_malloc_create", 00:13:12.533 "params": { 00:13:12.533 "name": "malloc0", 00:13:12.533 "num_blocks": 8192, 00:13:12.533 "block_size": 4096, 00:13:12.533 "physical_block_size": 4096, 00:13:12.533 "uuid": "4025a79c-7529-447e-b674-740b50476e65", 00:13:12.533 "optimal_io_boundary": 0, 00:13:12.533 "md_size": 0, 00:13:12.533 "dif_type": 0, 00:13:12.533 "dif_is_head_of_md": false, 00:13:12.533 "dif_pi_format": 0 00:13:12.533 } 00:13:12.534 }, 00:13:12.534 { 00:13:12.534 "method": "bdev_wait_for_examine" 00:13:12.534 } 00:13:12.534 ] 00:13:12.534 }, 00:13:12.534 { 00:13:12.534 "subsystem": "scsi", 00:13:12.534 "config": null 00:13:12.534 }, 00:13:12.534 { 00:13:12.534 "subsystem": "scheduler", 00:13:12.534 "config": [ 00:13:12.534 { 00:13:12.534 "method": "framework_set_scheduler", 00:13:12.534 "params": { 00:13:12.534 "name": "static" 00:13:12.534 } 00:13:12.534 } 00:13:12.534 ] 00:13:12.534 }, 00:13:12.534 { 00:13:12.534 "subsystem": "vhost_scsi", 00:13:12.534 "config": [] 00:13:12.534 }, 00:13:12.534 { 00:13:12.534 "subsystem": "vhost_blk", 00:13:12.534 "config": [] 00:13:12.534 }, 00:13:12.534 { 00:13:12.534 "subsystem": "ublk", 00:13:12.534 "config": [ 00:13:12.534 { 00:13:12.534 "method": "ublk_create_target", 00:13:12.534 "params": { 00:13:12.534 "cpumask": "1" 00:13:12.534 } 00:13:12.534 }, 00:13:12.534 { 00:13:12.534 "method": "ublk_start_disk", 00:13:12.534 "params": { 00:13:12.534 "bdev_name": "malloc0", 00:13:12.534 "ublk_id": 0, 00:13:12.534 "num_queues": 1, 00:13:12.534 "queue_depth": 128 00:13:12.534 } 00:13:12.534 } 00:13:12.534 ] 00:13:12.534 }, 00:13:12.534 { 00:13:12.534 "subsystem": "nbd", 00:13:12.534 "config": [] 00:13:12.534 }, 00:13:12.534 { 00:13:12.534 "subsystem": "nvmf", 00:13:12.534 "config": [ 00:13:12.534 { 00:13:12.534 "method": 
"nvmf_set_config", 00:13:12.534 "params": { 00:13:12.534 "discovery_filter": "match_any", 00:13:12.534 "admin_cmd_passthru": { 00:13:12.534 "identify_ctrlr": false 00:13:12.534 }, 00:13:12.534 "dhchap_digests": [ 00:13:12.534 "sha256", 00:13:12.534 "sha384", 00:13:12.534 "sha512" 00:13:12.534 ], 00:13:12.534 "dhchap_dhgroups": [ 00:13:12.534 "null", 00:13:12.534 "ffdhe2048", 00:13:12.534 "ffdhe3072", 00:13:12.534 "ffdhe4096", 00:13:12.534 "ffdhe6144", 00:13:12.534 "ffdhe8192" 00:13:12.534 ] 00:13:12.534 } 00:13:12.534 }, 00:13:12.534 { 00:13:12.534 "method": "nvmf_set_max_subsystems", 00:13:12.534 "params": { 00:13:12.534 "max_subsystems": 1024 00:13:12.534 } 00:13:12.534 }, 00:13:12.534 { 00:13:12.534 "method": "nvmf_set_crdt", 00:13:12.534 "params": { 00:13:12.534 "crdt1": 0, 00:13:12.534 "crdt2": 0, 00:13:12.534 "crdt3": 0 00:13:12.534 } 00:13:12.534 } 00:13:12.534 ] 00:13:12.534 }, 00:13:12.534 { 00:13:12.534 "subsystem": "iscsi", 00:13:12.534 "config": [ 00:13:12.534 { 00:13:12.534 "method": "iscsi_set_options", 00:13:12.534 "params": { 00:13:12.534 "node_base": "iqn.2016-06.io.spdk", 00:13:12.534 "max_sessions": 128, 00:13:12.534 "max_connections_per_session": 2, 00:13:12.534 "max_queue_depth": 64, 00:13:12.534 "default_time2wait": 2, 00:13:12.534 "default_time2retain": 20, 00:13:12.534 "first_burst_length": 8192, 00:13:12.534 "immediate_data": true, 00:13:12.534 "allow_duplicated_isid": false, 00:13:12.534 "error_recovery_level": 0, 00:13:12.534 "nop_timeout": 60, 00:13:12.534 "nop_in_interval": 30, 00:13:12.534 "disable_chap": false, 00:13:12.534 "require_chap": false, 00:13:12.534 "mutual_chap": false, 00:13:12.534 "chap_group": 0, 00:13:12.534 "max_large_datain_per_connection": 64, 00:13:12.534 "max_r2t_per_connection": 4, 00:13:12.534 "pdu_pool_size": 36864, 00:13:12.534 "immediate_data_pool_size": 16384, 00:13:12.534 "data_out_pool_size": 2048 00:13:12.534 } 00:13:12.534 } 00:13:12.534 ] 00:13:12.534 } 00:13:12.534 ] 00:13:12.534 }' 00:13:12.534 10:33:47 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 83689 00:13:12.534 10:33:47 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 83689 ']' 00:13:12.534 10:33:47 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 83689 00:13:12.534 10:33:47 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:13:12.534 10:33:47 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:12.534 10:33:47 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83689 00:13:12.534 killing process with pid 83689 00:13:12.534 10:33:47 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:12.534 10:33:47 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:12.534 10:33:47 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83689' 00:13:12.534 10:33:47 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 83689 00:13:12.534 10:33:47 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 83689 00:13:12.796 [2024-09-28 10:33:47.481153] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:12.796 [2024-09-28 10:33:47.514092] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:12.796 [2024-09-28 10:33:47.514249] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 
00:13:12.796 [2024-09-28 10:33:47.520944] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:12.796 [2024-09-28 10:33:47.521018] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:12.796 [2024-09-28 10:33:47.521035] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:12.796 [2024-09-28 10:33:47.521067] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:12.796 [2024-09-28 10:33:47.521229] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:13.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:13.370 10:33:47 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=83727 00:13:13.370 10:33:47 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 83727 00:13:13.370 10:33:47 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 83727 ']' 00:13:13.370 10:33:47 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:13.370 10:33:47 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:13.370 10:33:47 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:13.370 10:33:47 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:13.370 10:33:47 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:13.370 10:33:47 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:13:13.370 10:33:47 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:13:13.370 "subsystems": [ 00:13:13.370 { 00:13:13.370 "subsystem": "fsdev", 00:13:13.370 "config": [ 00:13:13.370 { 00:13:13.370 "method": "fsdev_set_opts", 00:13:13.370 "params": { 00:13:13.370 "fsdev_io_pool_size": 65535, 00:13:13.370 "fsdev_io_cache_size": 256 00:13:13.370 } 00:13:13.370 } 00:13:13.370 ] 00:13:13.370 }, 00:13:13.370 { 00:13:13.370 "subsystem": "keyring", 00:13:13.370 "config": [] 00:13:13.370 }, 00:13:13.370 { 00:13:13.370 "subsystem": "iobuf", 00:13:13.370 "config": [ 00:13:13.370 { 00:13:13.370 "method": "iobuf_set_options", 00:13:13.370 "params": { 00:13:13.370 "small_pool_count": 8192, 00:13:13.370 "large_pool_count": 1024, 00:13:13.370 "small_bufsize": 8192, 00:13:13.370 "large_bufsize": 135168 00:13:13.370 } 00:13:13.370 } 00:13:13.370 ] 00:13:13.370 }, 00:13:13.370 { 00:13:13.370 "subsystem": "sock", 00:13:13.370 "config": [ 00:13:13.370 { 00:13:13.370 "method": "sock_set_default_impl", 00:13:13.370 "params": { 00:13:13.370 "impl_name": "posix" 00:13:13.370 } 00:13:13.370 }, 00:13:13.370 { 00:13:13.370 "method": "sock_impl_set_options", 00:13:13.370 "params": { 00:13:13.370 "impl_name": "ssl", 00:13:13.370 "recv_buf_size": 4096, 00:13:13.370 "send_buf_size": 4096, 00:13:13.370 "enable_recv_pipe": true, 00:13:13.370 "enable_quickack": false, 00:13:13.370 "enable_placement_id": 0, 00:13:13.370 "enable_zerocopy_send_server": true, 00:13:13.370 "enable_zerocopy_send_client": false, 00:13:13.370 "zerocopy_threshold": 0, 00:13:13.370 "tls_version": 0, 00:13:13.370 "enable_ktls": false 00:13:13.370 } 00:13:13.370 }, 00:13:13.370 { 00:13:13.370 "method": "sock_impl_set_options", 00:13:13.370 "params": { 00:13:13.370 "impl_name": "posix", 00:13:13.370 "recv_buf_size": 2097152, 00:13:13.370 "send_buf_size": 2097152, 00:13:13.370 "enable_recv_pipe": true, 00:13:13.370 
"enable_quickack": false, 00:13:13.370 "enable_placement_id": 0, 00:13:13.370 "enable_zerocopy_send_server": true, 00:13:13.370 "enable_zerocopy_send_client": false, 00:13:13.370 "zerocopy_threshold": 0, 00:13:13.370 "tls_version": 0, 00:13:13.370 "enable_ktls": false 00:13:13.370 } 00:13:13.370 } 00:13:13.370 ] 00:13:13.370 }, 00:13:13.370 { 00:13:13.370 "subsystem": "vmd", 00:13:13.370 "config": [] 00:13:13.370 }, 00:13:13.370 { 00:13:13.370 "subsystem": "accel", 00:13:13.370 "config": [ 00:13:13.370 { 00:13:13.370 "method": "accel_set_options", 00:13:13.370 "params": { 00:13:13.370 "small_cache_size": 128, 00:13:13.370 "large_cache_size": 16, 00:13:13.370 "task_count": 2048, 00:13:13.370 "sequence_count": 2048, 00:13:13.370 "buf_count": 2048 00:13:13.370 } 00:13:13.370 } 00:13:13.370 ] 00:13:13.370 }, 00:13:13.370 { 00:13:13.370 "subsystem": "bdev", 00:13:13.370 "config": [ 00:13:13.370 { 00:13:13.370 "method": "bdev_set_options", 00:13:13.370 "params": { 00:13:13.370 "bdev_io_pool_size": 65535, 00:13:13.370 "bdev_io_cache_size": 256, 00:13:13.370 "bdev_auto_examine": true, 00:13:13.370 "iobuf_small_cache_size": 128, 00:13:13.370 "iobuf_large_cache_size": 16 00:13:13.370 } 00:13:13.370 }, 00:13:13.370 { 00:13:13.370 "method": "bdev_raid_set_options", 00:13:13.370 "params": { 00:13:13.370 "process_window_size_kb": 1024, 00:13:13.370 "process_max_bandwidth_mb_sec": 0 00:13:13.370 } 00:13:13.370 }, 00:13:13.370 { 00:13:13.370 "method": "bdev_iscsi_set_options", 00:13:13.370 "params": { 00:13:13.370 "timeout_sec": 30 00:13:13.370 } 00:13:13.370 }, 00:13:13.370 { 00:13:13.370 "method": "bdev_nvme_set_options", 00:13:13.370 "params": { 00:13:13.370 "action_on_timeout": "none", 00:13:13.370 "timeout_us": 0, 00:13:13.370 "timeout_admin_us": 0, 00:13:13.370 "keep_alive_timeout_ms": 10000, 00:13:13.370 "arbitration_burst": 0, 00:13:13.370 "low_priority_weight": 0, 00:13:13.370 "medium_priority_weight": 0, 00:13:13.370 "high_priority_weight": 0, 00:13:13.370 "nvme_adminq_poll_period_us": 10000, 00:13:13.370 "nvme_ioq_poll_period_us": 0, 00:13:13.370 "io_queue_requests": 0, 00:13:13.370 "delay_cmd_submit": true, 00:13:13.370 "transport_retry_count": 4, 00:13:13.370 "bdev_retry_count": 3, 00:13:13.370 "transport_ack_timeout": 0, 00:13:13.370 "ctrlr_loss_timeout_sec": 0, 00:13:13.370 "reconnect_delay_sec": 0, 00:13:13.370 "fast_io_fail_timeout_sec": 0, 00:13:13.370 "disable_auto_failback": false, 00:13:13.370 "generate_uuids": false, 00:13:13.370 "transport_tos": 0, 00:13:13.370 "nvme_error_stat": false, 00:13:13.370 "rdma_srq_size": 0, 00:13:13.370 "io_path_stat": false, 00:13:13.370 "allow_accel_sequence": false, 00:13:13.370 "rdma_max_cq_size": 0, 00:13:13.370 "rdma_cm_event_timeout_ms": 0, 00:13:13.370 "dhchap_digests": [ 00:13:13.370 "sha256", 00:13:13.370 "sha384", 00:13:13.370 "sha512" 00:13:13.370 ], 00:13:13.370 "dhchap_dhgroups": [ 00:13:13.370 "null", 00:13:13.370 "ffdhe2048", 00:13:13.370 "ffdhe3072", 00:13:13.370 "ffdhe4096", 00:13:13.370 "ffdhe6144", 00:13:13.370 "ffdhe8192" 00:13:13.370 ] 00:13:13.370 } 00:13:13.370 }, 00:13:13.370 { 00:13:13.370 "method": "bdev_nvme_set_hotplug", 00:13:13.370 "params": { 00:13:13.370 "period_us": 100000, 00:13:13.370 "enable": false 00:13:13.370 } 00:13:13.370 }, 00:13:13.370 { 00:13:13.370 "method": "bdev_malloc_create", 00:13:13.370 "params": { 00:13:13.370 "name": "malloc0", 00:13:13.370 "num_blocks": 8192, 00:13:13.370 "block_size": 4096, 00:13:13.370 "physical_block_size": 4096, 00:13:13.370 "uuid": "4025a79c-7529-447e-b674-740b50476e65", 
00:13:13.370 "optimal_io_boundary": 0, 00:13:13.370 "md_size": 0, 00:13:13.370 "dif_type": 0, 00:13:13.370 "dif_is_head_of_md": false, 00:13:13.370 "dif_pi_format": 0 00:13:13.370 } 00:13:13.371 }, 00:13:13.371 { 00:13:13.371 "method": "bdev_wait_for_examine" 00:13:13.371 } 00:13:13.371 ] 00:13:13.371 }, 00:13:13.371 { 00:13:13.371 "subsystem": "scsi", 00:13:13.371 "config": null 00:13:13.371 }, 00:13:13.371 { 00:13:13.371 "subsystem": "scheduler", 00:13:13.371 "config": [ 00:13:13.371 { 00:13:13.371 "method": "framework_set_scheduler", 00:13:13.371 "params": { 00:13:13.371 "name": "static" 00:13:13.371 } 00:13:13.371 } 00:13:13.371 ] 00:13:13.371 }, 00:13:13.371 { 00:13:13.371 "subsystem": "vhost_scsi", 00:13:13.371 "config": [] 00:13:13.371 }, 00:13:13.371 { 00:13:13.371 "subsystem": "vhost_blk", 00:13:13.371 "config": [] 00:13:13.371 }, 00:13:13.371 { 00:13:13.371 "subsystem": "ublk", 00:13:13.371 "config": [ 00:13:13.371 { 00:13:13.371 "method": "ublk_create_target", 00:13:13.371 "params": { 00:13:13.371 "cpumask": "1" 00:13:13.371 } 00:13:13.371 }, 00:13:13.371 { 00:13:13.371 "method": "ublk_start_disk", 00:13:13.371 "params": { 00:13:13.371 "bdev_name": "malloc0", 00:13:13.371 "ublk_id": 0, 00:13:13.371 "num_queues": 1, 00:13:13.371 "queue_depth": 128 00:13:13.371 } 00:13:13.371 } 00:13:13.371 ] 00:13:13.371 }, 00:13:13.371 { 00:13:13.371 "subsystem": "nbd", 00:13:13.371 "config": [] 00:13:13.371 }, 00:13:13.371 { 00:13:13.371 "subsystem": "nvmf", 00:13:13.371 "config": [ 00:13:13.371 { 00:13:13.371 "method": "nvmf_set_config", 00:13:13.371 "params": { 00:13:13.371 "discovery_filter": "match_any", 00:13:13.371 "admin_cmd_passthru": { 00:13:13.371 "identify_ctrlr": false 00:13:13.371 }, 00:13:13.371 "dhchap_digests": [ 00:13:13.371 "sha256", 00:13:13.371 "sha384", 00:13:13.371 "sha512" 00:13:13.371 ], 00:13:13.371 "dhchap_dhgroups": [ 00:13:13.371 "null", 00:13:13.371 "ffdhe2048", 00:13:13.371 "ffdhe3072", 00:13:13.371 "ffdhe4096", 00:13:13.371 "ffdhe6144", 00:13:13.371 "ffdhe8192" 00:13:13.371 ] 00:13:13.371 } 00:13:13.371 }, 00:13:13.371 { 00:13:13.371 "method": "nvmf_set_max_subsystems", 00:13:13.371 "params": { 00:13:13.371 "max_subsystems": 1024 00:13:13.371 } 00:13:13.371 }, 00:13:13.371 { 00:13:13.371 "method": "nvmf_set_crdt", 00:13:13.371 "params": { 00:13:13.371 "crdt1": 0, 00:13:13.371 "crdt2": 0, 00:13:13.371 "crdt3": 0 00:13:13.371 } 00:13:13.371 } 00:13:13.371 ] 00:13:13.371 }, 00:13:13.371 { 00:13:13.371 "subsystem": "iscsi", 00:13:13.371 "config": [ 00:13:13.371 { 00:13:13.371 "method": "iscsi_set_options", 00:13:13.371 "params": { 00:13:13.371 "node_base": "iqn.2016-06.io.spdk", 00:13:13.371 "max_sessions": 128, 00:13:13.371 "max_connections_per_session": 2, 00:13:13.371 "max_queue_depth": 64, 00:13:13.371 "default_time2wait": 2, 00:13:13.371 "default_time2retain": 20, 00:13:13.371 "first_burst_length": 8192, 00:13:13.371 "immediate_data": true, 00:13:13.371 "allow_duplicated_isid": false, 00:13:13.371 "error_recovery_level": 0, 00:13:13.371 "nop_timeout": 60, 00:13:13.371 "nop_in_interval": 30, 00:13:13.371 "disable_chap": false, 00:13:13.371 "require_chap": false, 00:13:13.371 "mutual_chap": false, 00:13:13.371 "chap_group": 0, 00:13:13.371 "max_large_datain_per_connection": 64, 00:13:13.371 "max_r2t_per_connection": 4, 00:13:13.371 "pdu_pool_size": 36864, 00:13:13.371 "immediate_data_pool_size": 16384, 00:13:13.371 "data_out_pool_size": 2048 00:13:13.371 } 00:13:13.371 } 00:13:13.371 ] 00:13:13.371 } 00:13:13.371 ] 00:13:13.371 }' 00:13:13.371 [2024-09-28 
10:33:48.064887] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:13:13.371 [2024-09-28 10:33:48.065286] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83727 ] 00:13:13.633 [2024-09-28 10:33:48.196649] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:13.633 [2024-09-28 10:33:48.214277] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:13.633 [2024-09-28 10:33:48.273753] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.893 [2024-09-28 10:33:48.591982] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:13.893 [2024-09-28 10:33:48.592357] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:13.893 [2024-09-28 10:33:48.600141] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:13.893 [2024-09-28 10:33:48.600331] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:13.893 [2024-09-28 10:33:48.600365] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:13.893 [2024-09-28 10:33:48.600437] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:13.893 [2024-09-28 10:33:48.609080] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:13.893 [2024-09-28 10:33:48.609222] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:13.893 [2024-09-28 10:33:48.616004] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:13.893 [2024-09-28 10:33:48.616114] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:13.893 [2024-09-28 10:33:48.632982] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:14.154 10:33:48 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:14.154 10:33:48 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:13:14.154 10:33:48 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:13:14.155 10:33:48 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:13:14.155 10:33:48 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.155 10:33:48 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:14.415 10:33:48 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.415 10:33:48 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:14.415 10:33:48 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:13:14.415 10:33:48 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 83727 00:13:14.415 10:33:48 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 83727 ']' 00:13:14.415 10:33:48 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 83727 00:13:14.415 10:33:48 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:13:14.415 10:33:48 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:13:14.415 10:33:48 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83727 00:13:14.415 killing process with pid 83727 00:13:14.415 10:33:48 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:14.415 10:33:48 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:14.415 10:33:48 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83727' 00:13:14.415 10:33:48 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 83727 00:13:14.415 10:33:48 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 83727 00:13:14.676 [2024-09-28 10:33:49.265866] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:14.676 [2024-09-28 10:33:49.304020] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:14.676 [2024-09-28 10:33:49.304162] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:14.676 [2024-09-28 10:33:49.312013] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:14.676 [2024-09-28 10:33:49.312075] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:14.676 [2024-09-28 10:33:49.312090] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:14.676 [2024-09-28 10:33:49.312126] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:14.676 [2024-09-28 10:33:49.312284] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:15.249 10:33:49 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:13:15.249 ************************************ 00:13:15.249 END TEST test_save_ublk_config 00:13:15.249 ************************************ 00:13:15.249 00:13:15.249 real 0m3.876s 00:13:15.249 user 0m2.698s 00:13:15.249 sys 0m1.820s 00:13:15.249 10:33:49 ublk.test_save_ublk_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:15.249 10:33:49 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:15.249 10:33:49 ublk -- ublk/ublk.sh@139 -- # spdk_pid=83777 00:13:15.249 10:33:49 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:15.249 10:33:49 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:15.249 10:33:49 ublk -- ublk/ublk.sh@141 -- # waitforlisten 83777 00:13:15.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:15.249 10:33:49 ublk -- common/autotest_common.sh@831 -- # '[' -z 83777 ']' 00:13:15.249 10:33:49 ublk -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.249 10:33:49 ublk -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:15.249 10:33:49 ublk -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.249 10:33:49 ublk -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:15.249 10:33:49 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:15.249 [2024-09-28 10:33:49.938245] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
00:13:15.249 [2024-09-28 10:33:49.938389] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83777 ] 00:13:15.510 [2024-09-28 10:33:50.068694] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:15.510 [2024-09-28 10:33:50.088773] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:15.510 [2024-09-28 10:33:50.122547] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:15.510 [2024-09-28 10:33:50.122602] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.082 10:33:50 ublk -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:16.082 10:33:50 ublk -- common/autotest_common.sh@864 -- # return 0 00:13:16.082 10:33:50 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:13:16.082 10:33:50 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:16.082 10:33:50 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:16.082 10:33:50 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:16.082 ************************************ 00:13:16.082 START TEST test_create_ublk 00:13:16.082 ************************************ 00:13:16.082 10:33:50 ublk.test_create_ublk -- common/autotest_common.sh@1125 -- # test_create_ublk 00:13:16.082 10:33:50 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:13:16.082 10:33:50 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.082 10:33:50 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:16.082 [2024-09-28 10:33:50.792983] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:16.082 [2024-09-28 10:33:50.794114] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:16.082 10:33:50 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.082 10:33:50 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:13:16.083 10:33:50 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:13:16.083 10:33:50 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.083 10:33:50 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:16.083 10:33:50 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.083 10:33:50 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:13:16.083 10:33:50 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:16.083 10:33:50 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.083 10:33:50 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:16.344 [2024-09-28 10:33:50.861124] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:13:16.344 [2024-09-28 10:33:50.861492] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:16.344 [2024-09-28 10:33:50.861507] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:16.344 [2024-09-28 10:33:50.861514] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:16.344 [2024-09-28 10:33:50.869012] ublk.c: 349:ublk_ctrl_process_cqe: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:16.344 [2024-09-28 10:33:50.869032] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:16.344 [2024-09-28 10:33:50.876993] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:16.344 [2024-09-28 10:33:50.877605] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:16.344 [2024-09-28 10:33:50.899988] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:16.344 10:33:50 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.344 10:33:50 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:13:16.344 10:33:50 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:13:16.344 10:33:50 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:13:16.344 10:33:50 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.344 10:33:50 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:16.344 10:33:50 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.344 10:33:50 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:13:16.344 { 00:13:16.344 "ublk_device": "/dev/ublkb0", 00:13:16.344 "id": 0, 00:13:16.344 "queue_depth": 512, 00:13:16.344 "num_queues": 4, 00:13:16.344 "bdev_name": "Malloc0" 00:13:16.344 } 00:13:16.344 ]' 00:13:16.344 10:33:50 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:13:16.344 10:33:50 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:16.344 10:33:50 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:13:16.344 10:33:50 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:13:16.344 10:33:50 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:13:16.344 10:33:51 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:13:16.344 10:33:51 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:13:16.344 10:33:51 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:13:16.344 10:33:51 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:13:16.344 10:33:51 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:16.344 10:33:51 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:13:16.344 10:33:51 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:13:16.344 10:33:51 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:13:16.344 10:33:51 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:13:16.344 10:33:51 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:13:16.344 10:33:51 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:13:16.344 10:33:51 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:13:16.344 10:33:51 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:13:16.344 10:33:51 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:13:16.344 10:33:51 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:13:16.344 10:33:51 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio 
--name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:13:16.344 10:33:51 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:13:16.604 fio: verification read phase will never start because write phase uses all of runtime 00:13:16.604 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:13:16.604 fio-3.35 00:13:16.604 Starting 1 process 00:13:26.600 00:13:26.600 fio_test: (groupid=0, jobs=1): err= 0: pid=83822: Sat Sep 28 10:34:01 2024 00:13:26.600 write: IOPS=17.9k, BW=69.8MiB/s (73.2MB/s)(698MiB/10001msec); 0 zone resets 00:13:26.600 clat (usec): min=35, max=10361, avg=55.14, stdev=151.29 00:13:26.600 lat (usec): min=35, max=10375, avg=55.59, stdev=151.31 00:13:26.600 clat percentiles (usec): 00:13:26.600 | 1.00th=[ 39], 5.00th=[ 41], 10.00th=[ 42], 20.00th=[ 44], 00:13:26.600 | 30.00th=[ 46], 40.00th=[ 47], 50.00th=[ 48], 60.00th=[ 49], 00:13:26.600 | 70.00th=[ 50], 80.00th=[ 51], 90.00th=[ 55], 95.00th=[ 61], 00:13:26.600 | 99.00th=[ 73], 99.50th=[ 95], 99.90th=[ 3392], 99.95th=[ 3654], 00:13:26.600 | 99.99th=[ 3982] 00:13:26.600 bw ( KiB/s): min=22976, max=81296, per=99.65%, avg=71246.74, stdev=18379.36, samples=19 00:13:26.600 iops : min= 5744, max=20324, avg=17811.68, stdev=4594.84, samples=19 00:13:26.600 lat (usec) : 50=75.01%, 100=24.51%, 250=0.20%, 500=0.03%, 750=0.01% 00:13:26.600 lat (usec) : 1000=0.01% 00:13:26.600 lat (msec) : 2=0.05%, 4=0.17%, 10=0.01%, 20=0.01% 00:13:26.600 cpu : usr=3.43%, sys=13.26%, ctx=178799, majf=0, minf=797 00:13:26.600 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:26.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:26.600 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:26.600 issued rwts: total=0,178763,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:26.600 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:26.600 00:13:26.600 Run status group 0 (all jobs): 00:13:26.600 WRITE: bw=69.8MiB/s (73.2MB/s), 69.8MiB/s-69.8MiB/s (73.2MB/s-73.2MB/s), io=698MiB (732MB), run=10001-10001msec 00:13:26.600 00:13:26.600 Disk stats (read/write): 00:13:26.600 ublkb0: ios=0/176763, merge=0/0, ticks=0/8300, in_queue=8300, util=99.09% 00:13:26.600 10:34:01 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:13:26.600 10:34:01 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.600 10:34:01 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:26.600 [2024-09-28 10:34:01.319380] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:26.600 [2024-09-28 10:34:01.351394] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:26.600 [2024-09-28 10:34:01.352300] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:26.601 [2024-09-28 10:34:01.358988] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:26.601 [2024-09-28 10:34:01.359211] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:26.601 [2024-09-28 10:34:01.359220] ublk.c:1819:ublk_free_dev: *NOTICE*: 
ublk dev 0 stopped 00:13:26.601 10:34:01 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.601 10:34:01 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:13:26.601 10:34:01 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:13:26.601 10:34:01 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:13:26.601 10:34:01 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:26.601 10:34:01 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:26.601 10:34:01 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:26.601 10:34:01 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:26.601 10:34:01 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:13:26.601 10:34:01 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.601 10:34:01 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:26.601 [2024-09-28 10:34:01.375049] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:13:26.858 request: 00:13:26.858 { 00:13:26.858 "ublk_id": 0, 00:13:26.858 "method": "ublk_stop_disk", 00:13:26.858 "req_id": 1 00:13:26.858 } 00:13:26.858 Got JSON-RPC error response 00:13:26.858 response: 00:13:26.858 { 00:13:26.858 "code": -19, 00:13:26.858 "message": "No such device" 00:13:26.859 } 00:13:26.859 10:34:01 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:26.859 10:34:01 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:13:26.859 10:34:01 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:26.859 10:34:01 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:26.859 10:34:01 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:26.859 10:34:01 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:13:26.859 10:34:01 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.859 10:34:01 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:26.859 [2024-09-28 10:34:01.391035] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:26.859 [2024-09-28 10:34:01.391984] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:26.859 [2024-09-28 10:34:01.392019] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:13:26.859 10:34:01 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.859 10:34:01 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:26.859 10:34:01 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.859 10:34:01 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:26.859 10:34:01 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.859 10:34:01 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:13:26.859 10:34:01 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:26.859 10:34:01 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.859 10:34:01 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:26.859 10:34:01 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.859 
10:34:01 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:26.859 10:34:01 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:13:26.859 10:34:01 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:26.859 10:34:01 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:26.859 10:34:01 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.859 10:34:01 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:26.859 10:34:01 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.859 10:34:01 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:26.859 10:34:01 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:13:26.859 ************************************ 00:13:26.859 END TEST test_create_ublk 00:13:26.859 ************************************ 00:13:26.859 10:34:01 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:26.859 00:13:26.859 real 0m10.761s 00:13:26.859 user 0m0.654s 00:13:26.859 sys 0m1.398s 00:13:26.859 10:34:01 ublk.test_create_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:26.859 10:34:01 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:26.859 10:34:01 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:13:26.859 10:34:01 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:26.859 10:34:01 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:26.859 10:34:01 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:26.859 ************************************ 00:13:26.859 START TEST test_create_multi_ublk 00:13:26.859 ************************************ 00:13:26.859 10:34:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@1125 -- # test_create_multi_ublk 00:13:26.859 10:34:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:13:26.859 10:34:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.859 10:34:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:26.859 [2024-09-28 10:34:01.594985] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:26.859 [2024-09-28 10:34:01.595886] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:26.859 10:34:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.859 10:34:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:13:26.859 10:34:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:13:26.859 10:34:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:26.859 10:34:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:13:26.859 10:34:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.859 10:34:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:27.118 10:34:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.118 10:34:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:13:27.118 10:34:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:27.118 10:34:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.118 10:34:01 
ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:27.118 [2024-09-28 10:34:01.667083] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:13:27.118 [2024-09-28 10:34:01.667366] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:27.118 [2024-09-28 10:34:01.667378] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:27.118 [2024-09-28 10:34:01.667391] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:27.118 [2024-09-28 10:34:01.690978] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:27.118 [2024-09-28 10:34:01.691001] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:27.118 [2024-09-28 10:34:01.702976] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:27.118 [2024-09-28 10:34:01.703459] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:27.118 [2024-09-28 10:34:01.733986] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:27.118 10:34:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.118 10:34:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:13:27.118 10:34:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:27.118 10:34:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:13:27.118 10:34:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.118 10:34:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:27.118 10:34:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.118 10:34:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:13:27.118 10:34:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:13:27.118 10:34:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.118 10:34:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:27.118 [2024-09-28 10:34:01.817073] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:13:27.118 [2024-09-28 10:34:01.817356] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:13:27.118 [2024-09-28 10:34:01.817369] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:27.118 [2024-09-28 10:34:01.817374] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:13:27.118 [2024-09-28 10:34:01.828993] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:27.118 [2024-09-28 10:34:01.829010] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:27.118 [2024-09-28 10:34:01.840985] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:27.118 [2024-09-28 10:34:01.841456] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:13:27.118 [2024-09-28 10:34:01.865986] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:13:27.118 10:34:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.118 10:34:01 
ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:13:27.118 10:34:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:27.118 10:34:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:13:27.118 10:34:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.118 10:34:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:27.377 10:34:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.377 10:34:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:13:27.377 10:34:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:13:27.377 10:34:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.377 10:34:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:27.377 [2024-09-28 10:34:01.949069] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:13:27.377 [2024-09-28 10:34:01.949355] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:13:27.377 [2024-09-28 10:34:01.949367] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:13:27.377 [2024-09-28 10:34:01.949373] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:13:27.377 [2024-09-28 10:34:01.960990] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:27.377 [2024-09-28 10:34:01.961010] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:27.377 [2024-09-28 10:34:01.972978] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:27.377 [2024-09-28 10:34:01.973458] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:13:27.377 [2024-09-28 10:34:01.979989] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:13:27.377 10:34:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.377 10:34:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:13:27.377 10:34:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:27.377 10:34:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:13:27.377 10:34:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.377 10:34:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:27.377 10:34:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.377 10:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:13:27.377 10:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:13:27.377 10:34:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.377 10:34:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:27.377 [2024-09-28 10:34:02.065078] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:13:27.377 [2024-09-28 10:34:02.065370] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:13:27.377 [2024-09-28 10:34:02.065383] ublk.c: 
971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:13:27.377 [2024-09-28 10:34:02.065389] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:13:27.377 [2024-09-28 10:34:02.076999] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:27.377 [2024-09-28 10:34:02.077015] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:27.377 [2024-09-28 10:34:02.088984] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:27.378 [2024-09-28 10:34:02.089464] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:13:27.378 [2024-09-28 10:34:02.108036] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:13:27.378 10:34:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.378 10:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:13:27.378 10:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:13:27.378 10:34:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.378 10:34:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:27.378 10:34:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.378 10:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:13:27.378 { 00:13:27.378 "ublk_device": "/dev/ublkb0", 00:13:27.378 "id": 0, 00:13:27.378 "queue_depth": 512, 00:13:27.378 "num_queues": 4, 00:13:27.378 "bdev_name": "Malloc0" 00:13:27.378 }, 00:13:27.378 { 00:13:27.378 "ublk_device": "/dev/ublkb1", 00:13:27.378 "id": 1, 00:13:27.378 "queue_depth": 512, 00:13:27.378 "num_queues": 4, 00:13:27.378 "bdev_name": "Malloc1" 00:13:27.378 }, 00:13:27.378 { 00:13:27.378 "ublk_device": "/dev/ublkb2", 00:13:27.378 "id": 2, 00:13:27.378 "queue_depth": 512, 00:13:27.378 "num_queues": 4, 00:13:27.378 "bdev_name": "Malloc2" 00:13:27.378 }, 00:13:27.378 { 00:13:27.378 "ublk_device": "/dev/ublkb3", 00:13:27.378 "id": 3, 00:13:27.378 "queue_depth": 512, 00:13:27.378 "num_queues": 4, 00:13:27.378 "bdev_name": "Malloc3" 00:13:27.378 } 00:13:27.378 ]' 00:13:27.378 10:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:13:27.378 10:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:27.378 10:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:13:27.636 10:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:27.636 10:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:13:27.636 10:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:13:27.636 10:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:13:27.636 10:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:27.636 10:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:13:27.636 10:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:27.636 10:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:13:27.636 10:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:27.636 10:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:27.636 10:34:02 
ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:13:27.636 10:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:13:27.636 10:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:13:27.636 10:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:13:27.636 10:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:13:27.636 10:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:27.636 10:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:13:27.894 10:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:27.894 10:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:13:27.894 10:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:13:27.894 10:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:27.894 10:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:13:27.894 10:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:13:27.894 10:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:13:27.894 10:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:13:27.894 10:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:13:27.894 10:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:27.894 10:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:13:27.894 10:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:27.894 10:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:13:27.894 10:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:13:27.894 10:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:27.894 10:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:13:27.894 10:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:13:27.894 10:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:13:27.894 10:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:13:27.894 10:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:13:28.153 10:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:28.153 10:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:13:28.153 10:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:28.153 10:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:13:28.153 10:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:13:28.153 10:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:13:28.153 10:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:13:28.153 10:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:28.153 10:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:13:28.153 10:34:02 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.153 10:34:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:28.153 [2024-09-28 10:34:02.761062] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:28.153 [2024-09-28 10:34:02.793020] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:28.153 [2024-09-28 10:34:02.793667] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:28.153 [2024-09-28 10:34:02.800995] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:28.153 [2024-09-28 10:34:02.801219] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:28.153 [2024-09-28 10:34:02.801233] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:28.153 10:34:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.153 10:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:28.153 10:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:13:28.153 10:34:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.153 10:34:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:28.153 [2024-09-28 10:34:02.817034] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:13:28.153 [2024-09-28 10:34:02.848354] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:28.153 [2024-09-28 10:34:02.849323] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:13:28.153 [2024-09-28 10:34:02.855994] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:28.153 [2024-09-28 10:34:02.856195] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:13:28.153 [2024-09-28 10:34:02.856211] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:13:28.153 10:34:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.153 10:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:28.153 10:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:13:28.153 10:34:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.153 10:34:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:28.153 [2024-09-28 10:34:02.872059] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:13:28.153 [2024-09-28 10:34:02.904015] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:28.153 [2024-09-28 10:34:02.904603] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:13:28.153 [2024-09-28 10:34:02.911985] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:28.153 [2024-09-28 10:34:02.912199] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:13:28.153 [2024-09-28 10:34:02.912218] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:13:28.153 10:34:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.153 10:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:28.153 10:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # 
rpc_cmd ublk_stop_disk 3 00:13:28.153 10:34:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.153 10:34:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:28.153 [2024-09-28 10:34:02.928035] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:13:28.412 [2024-09-28 10:34:02.960408] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:28.412 [2024-09-28 10:34:02.961303] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:13:28.412 [2024-09-28 10:34:02.969008] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:28.412 [2024-09-28 10:34:02.969225] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:13:28.412 [2024-09-28 10:34:02.969236] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:13:28.412 10:34:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.412 10:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:13:28.412 [2024-09-28 10:34:03.160039] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:28.412 [2024-09-28 10:34:03.160942] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:28.412 [2024-09-28 10:34:03.160985] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:13:28.412 10:34:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:13:28.412 10:34:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:28.412 10:34:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:28.412 10:34:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.412 10:34:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:28.671 10:34:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.671 10:34:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:28.671 10:34:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:13:28.671 10:34:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.671 10:34:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:28.671 10:34:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.671 10:34:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:28.671 10:34:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:13:28.671 10:34:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.671 10:34:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:28.671 10:34:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.671 10:34:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:28.671 10:34:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:13:28.671 10:34:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.671 10:34:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:28.671 10:34:03 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.671 10:34:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:13:28.671 10:34:03 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:28.671 10:34:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.671 10:34:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:28.671 10:34:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.671 10:34:03 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:28.671 10:34:03 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:13:28.930 10:34:03 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:28.930 10:34:03 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:28.930 10:34:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.930 10:34:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:28.930 10:34:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.930 10:34:03 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:28.930 10:34:03 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:13:28.930 ************************************ 00:13:28.930 END TEST test_create_multi_ublk 00:13:28.930 ************************************ 00:13:28.930 10:34:03 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:28.930 00:13:28.930 real 0m1.928s 00:13:28.930 user 0m0.802s 00:13:28.930 sys 0m0.136s 00:13:28.930 10:34:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:28.930 10:34:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:28.930 10:34:03 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:13:28.930 10:34:03 ublk -- ublk/ublk.sh@147 -- # cleanup 00:13:28.930 10:34:03 ublk -- ublk/ublk.sh@130 -- # killprocess 83777 00:13:28.930 10:34:03 ublk -- common/autotest_common.sh@950 -- # '[' -z 83777 ']' 00:13:28.930 10:34:03 ublk -- common/autotest_common.sh@954 -- # kill -0 83777 00:13:28.930 10:34:03 ublk -- common/autotest_common.sh@955 -- # uname 00:13:28.930 10:34:03 ublk -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:28.930 10:34:03 ublk -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83777 00:13:28.930 killing process with pid 83777 00:13:28.930 10:34:03 ublk -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:28.930 10:34:03 ublk -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:28.930 10:34:03 ublk -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83777' 00:13:28.930 10:34:03 ublk -- common/autotest_common.sh@969 -- # kill 83777 00:13:28.930 10:34:03 ublk -- common/autotest_common.sh@974 -- # wait 83777 00:13:29.188 [2024-09-28 10:34:03.726651] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:29.189 [2024-09-28 10:34:03.726702] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:29.446 00:13:29.446 real 0m18.243s 00:13:29.446 user 0m27.764s 00:13:29.446 sys 0m8.076s 00:13:29.446 ************************************ 00:13:29.446 END TEST ublk 00:13:29.447 ************************************ 00:13:29.447 10:34:04 ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:29.447 10:34:04 ublk -- 
common/autotest_common.sh@10 -- # set +x 00:13:29.447 10:34:04 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:13:29.447 10:34:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:29.447 10:34:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:29.447 10:34:04 -- common/autotest_common.sh@10 -- # set +x 00:13:29.447 ************************************ 00:13:29.447 START TEST ublk_recovery 00:13:29.447 ************************************ 00:13:29.447 10:34:04 ublk_recovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:13:29.447 * Looking for test storage... 00:13:29.447 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:29.447 10:34:04 ublk_recovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:29.447 10:34:04 ublk_recovery -- common/autotest_common.sh@1681 -- # lcov --version 00:13:29.447 10:34:04 ublk_recovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:29.447 10:34:04 ublk_recovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:29.447 10:34:04 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:29.447 10:34:04 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:29.447 10:34:04 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:29.447 10:34:04 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:13:29.447 10:34:04 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:13:29.447 10:34:04 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:13:29.447 10:34:04 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:13:29.447 10:34:04 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:13:29.447 10:34:04 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:13:29.447 10:34:04 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:13:29.447 10:34:04 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:29.447 10:34:04 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:13:29.447 10:34:04 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:13:29.447 10:34:04 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:29.447 10:34:04 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:29.447 10:34:04 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:13:29.447 10:34:04 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:13:29.447 10:34:04 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:29.447 10:34:04 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:13:29.447 10:34:04 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:13:29.447 10:34:04 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:13:29.447 10:34:04 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:13:29.447 10:34:04 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:29.447 10:34:04 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:13:29.447 10:34:04 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:13:29.447 10:34:04 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:29.447 10:34:04 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:29.447 10:34:04 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:13:29.447 10:34:04 ublk_recovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:29.447 10:34:04 ublk_recovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:29.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.447 --rc genhtml_branch_coverage=1 00:13:29.447 --rc genhtml_function_coverage=1 00:13:29.447 --rc genhtml_legend=1 00:13:29.447 --rc geninfo_all_blocks=1 00:13:29.447 --rc geninfo_unexecuted_blocks=1 00:13:29.447 00:13:29.447 ' 00:13:29.447 10:34:04 ublk_recovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:29.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.447 --rc genhtml_branch_coverage=1 00:13:29.447 --rc genhtml_function_coverage=1 00:13:29.447 --rc genhtml_legend=1 00:13:29.447 --rc geninfo_all_blocks=1 00:13:29.447 --rc geninfo_unexecuted_blocks=1 00:13:29.447 00:13:29.447 ' 00:13:29.447 10:34:04 ublk_recovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:29.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.447 --rc genhtml_branch_coverage=1 00:13:29.447 --rc genhtml_function_coverage=1 00:13:29.447 --rc genhtml_legend=1 00:13:29.447 --rc geninfo_all_blocks=1 00:13:29.447 --rc geninfo_unexecuted_blocks=1 00:13:29.447 00:13:29.447 ' 00:13:29.447 10:34:04 ublk_recovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:29.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.447 --rc genhtml_branch_coverage=1 00:13:29.447 --rc genhtml_function_coverage=1 00:13:29.447 --rc genhtml_legend=1 00:13:29.447 --rc geninfo_all_blocks=1 00:13:29.447 --rc geninfo_unexecuted_blocks=1 00:13:29.447 00:13:29.447 ' 00:13:29.447 10:34:04 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:29.447 10:34:04 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:29.447 10:34:04 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:29.447 10:34:04 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:29.447 10:34:04 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:29.447 10:34:04 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:29.447 10:34:04 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:29.447 10:34:04 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:29.447 10:34:04 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:13:29.447 10:34:04 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:13:29.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:29.447 10:34:04 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=84139 00:13:29.447 10:34:04 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:29.447 10:34:04 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 84139 00:13:29.447 10:34:04 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 84139 ']' 00:13:29.447 10:34:04 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.447 10:34:04 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:29.447 10:34:04 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:29.447 10:34:04 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.447 10:34:04 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:29.447 10:34:04 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:29.706 [2024-09-28 10:34:04.263547] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:13:29.706 [2024-09-28 10:34:04.263661] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84139 ] 00:13:29.706 [2024-09-28 10:34:04.392502] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:13:29.706 [2024-09-28 10:34:04.408725] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:29.706 [2024-09-28 10:34:04.438586] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:29.706 [2024-09-28 10:34:04.438654] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.642 10:34:05 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:30.642 10:34:05 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:13:30.642 10:34:05 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:13:30.642 10:34:05 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.642 10:34:05 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:30.642 [2024-09-28 10:34:05.101984] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:30.642 [2024-09-28 10:34:05.102955] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:30.642 10:34:05 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.642 10:34:05 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:13:30.642 10:34:05 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.642 10:34:05 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:30.642 malloc0 00:13:30.642 10:34:05 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.642 10:34:05 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:13:30.642 10:34:05 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.642 10:34:05 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:30.642 [2024-09-28 10:34:05.134088] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:13:30.642 [2024-09-28 10:34:05.134168] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:13:30.642 [2024-09-28 10:34:05.134182] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:30.642 [2024-09-28 10:34:05.134187] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:13:30.642 [2024-09-28 10:34:05.143056] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:30.642 [2024-09-28 10:34:05.143073] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:30.642 [2024-09-28 10:34:05.149984] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:30.642 [2024-09-28 10:34:05.150098] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:13:30.642 [2024-09-28 10:34:05.159983] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:13:30.642 1 00:13:30.642 10:34:05 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.642 10:34:05 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:13:31.578 10:34:06 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=84172 00:13:31.578 10:34:06 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:13:31.578 10:34:06 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:13:31.578 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:31.578 
fio-3.35 00:13:31.578 Starting 1 process 00:13:36.842 10:34:11 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 84139 00:13:36.842 10:34:11 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:13:42.109 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 84139 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:13:42.109 10:34:16 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=84286 00:13:42.109 10:34:16 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:42.109 10:34:16 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:42.109 10:34:16 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 84286 00:13:42.109 10:34:16 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 84286 ']' 00:13:42.109 10:34:16 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:42.109 10:34:16 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:42.109 10:34:16 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:42.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:42.109 10:34:16 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:42.109 10:34:16 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:42.109 [2024-09-28 10:34:16.256688] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:13:42.109 [2024-09-28 10:34:16.257025] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84286 ] 00:13:42.109 [2024-09-28 10:34:16.385716] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:13:42.109 [2024-09-28 10:34:16.407596] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:42.109 [2024-09-28 10:34:16.439440] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:42.109 [2024-09-28 10:34:16.439520] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.381 10:34:16 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:42.381 10:34:16 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:13:42.381 10:34:17 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:13:42.381 10:34:17 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.381 10:34:17 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:42.381 [2024-09-28 10:34:17.008981] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:42.381 [2024-09-28 10:34:17.010072] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:42.381 10:34:17 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.381 10:34:17 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:13:42.381 10:34:17 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.381 10:34:17 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:42.381 malloc0 00:13:42.381 10:34:17 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.381 10:34:17 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:13:42.381 10:34:17 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.381 10:34:17 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:42.381 [2024-09-28 10:34:17.041104] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:13:42.381 [2024-09-28 10:34:17.041142] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:42.381 [2024-09-28 10:34:17.041152] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:13:42.381 [2024-09-28 10:34:17.049018] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:13:42.381 [2024-09-28 10:34:17.049047] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:13:42.381 [2024-09-28 10:34:17.049062] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:13:42.381 [2024-09-28 10:34:17.049134] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:13:42.381 1 00:13:42.381 10:34:17 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.381 10:34:17 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 84172 00:13:42.381 [2024-09-28 10:34:17.056983] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:13:42.381 [2024-09-28 10:34:17.063583] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:13:42.381 [2024-09-28 10:34:17.071217] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:13:42.381 [2024-09-28 10:34:17.071241] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:14:38.611 00:14:38.611 fio_test: (groupid=0, jobs=1): err= 0: pid=84175: Sat Sep 28 10:35:06 2024 00:14:38.611 read: IOPS=29.6k, BW=116MiB/s (121MB/s)(6937MiB/60002msec) 00:14:38.611 slat (nsec): min=900, max=216409, 
avg=4761.07, stdev=1397.45 00:14:38.611 clat (usec): min=780, max=5905.3k, avg=2161.97, stdev=38614.23 00:14:38.611 lat (usec): min=790, max=5905.3k, avg=2166.73, stdev=38614.23 00:14:38.611 clat percentiles (usec): 00:14:38.611 | 1.00th=[ 1598], 5.00th=[ 1696], 10.00th=[ 1713], 20.00th=[ 1745], 00:14:38.611 | 30.00th=[ 1762], 40.00th=[ 1778], 50.00th=[ 1778], 60.00th=[ 1795], 00:14:38.611 | 70.00th=[ 1811], 80.00th=[ 1827], 90.00th=[ 1893], 95.00th=[ 2802], 00:14:38.611 | 99.00th=[ 4817], 99.50th=[ 5211], 99.90th=[ 6587], 99.95th=[ 7898], 00:14:38.611 | 99.99th=[12649] 00:14:38.611 bw ( KiB/s): min=31840, max=136488, per=100.00%, avg=130386.30, stdev=14743.90, samples=108 00:14:38.611 iops : min= 7960, max=34122, avg=32596.57, stdev=3685.98, samples=108 00:14:38.611 write: IOPS=29.6k, BW=115MiB/s (121MB/s)(6929MiB/60002msec); 0 zone resets 00:14:38.611 slat (nsec): min=909, max=544422, avg=4786.89, stdev=1607.13 00:14:38.611 clat (usec): min=655, max=5905.4k, avg=2154.66, stdev=31959.00 00:14:38.611 lat (usec): min=665, max=5905.4k, avg=2159.45, stdev=31959.00 00:14:38.611 clat percentiles (usec): 00:14:38.611 | 1.00th=[ 1614], 5.00th=[ 1778], 10.00th=[ 1795], 20.00th=[ 1827], 00:14:38.611 | 30.00th=[ 1844], 40.00th=[ 1860], 50.00th=[ 1876], 60.00th=[ 1876], 00:14:38.611 | 70.00th=[ 1893], 80.00th=[ 1909], 90.00th=[ 1958], 95.00th=[ 2704], 00:14:38.611 | 99.00th=[ 4752], 99.50th=[ 5276], 99.90th=[ 6587], 99.95th=[ 7898], 00:14:38.611 | 99.99th=[12780] 00:14:38.611 bw ( KiB/s): min=31216, max=136168, per=100.00%, avg=130253.11, stdev=14830.76, samples=108 00:14:38.611 iops : min= 7804, max=34042, avg=32563.28, stdev=3707.69, samples=108 00:14:38.611 lat (usec) : 750=0.01%, 1000=0.01% 00:14:38.611 lat (msec) : 2=91.61%, 4=6.08%, 10=2.29%, 20=0.02%, >=2000=0.01% 00:14:38.611 cpu : usr=6.19%, sys=29.29%, ctx=121361, majf=0, minf=13 00:14:38.611 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:14:38.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:38.611 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:38.611 issued rwts: total=1775840,1773944,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:38.611 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:38.611 00:14:38.611 Run status group 0 (all jobs): 00:14:38.611 READ: bw=116MiB/s (121MB/s), 116MiB/s-116MiB/s (121MB/s-121MB/s), io=6937MiB (7274MB), run=60002-60002msec 00:14:38.611 WRITE: bw=115MiB/s (121MB/s), 115MiB/s-115MiB/s (121MB/s-121MB/s), io=6929MiB (7266MB), run=60002-60002msec 00:14:38.611 00:14:38.611 Disk stats (read/write): 00:14:38.611 ublkb1: ios=1772126/1770336, merge=0/0, ticks=3744874/3589512, in_queue=7334386, util=99.90% 00:14:38.611 10:35:06 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:14:38.611 10:35:06 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.611 10:35:06 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:38.611 [2024-09-28 10:35:06.429054] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:38.611 [2024-09-28 10:35:06.462091] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:38.611 [2024-09-28 10:35:06.462298] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:38.611 [2024-09-28 10:35:06.470986] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:38.611 [2024-09-28 10:35:06.471072] ublk.c: 
985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:14:38.611 [2024-09-28 10:35:06.471080] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:38.611 10:35:06 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.611 10:35:06 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:14:38.611 10:35:06 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.611 10:35:06 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:38.611 [2024-09-28 10:35:06.481055] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:38.611 [2024-09-28 10:35:06.481949] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:38.611 [2024-09-28 10:35:06.481985] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:38.611 10:35:06 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.611 10:35:06 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:14:38.611 10:35:06 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:14:38.611 10:35:06 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 84286 00:14:38.611 10:35:06 ublk_recovery -- common/autotest_common.sh@950 -- # '[' -z 84286 ']' 00:14:38.611 10:35:06 ublk_recovery -- common/autotest_common.sh@954 -- # kill -0 84286 00:14:38.611 10:35:06 ublk_recovery -- common/autotest_common.sh@955 -- # uname 00:14:38.611 10:35:06 ublk_recovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:38.611 10:35:06 ublk_recovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84286 00:14:38.611 killing process with pid 84286 00:14:38.611 10:35:06 ublk_recovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:38.611 10:35:06 ublk_recovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:38.611 10:35:06 ublk_recovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84286' 00:14:38.611 10:35:06 ublk_recovery -- common/autotest_common.sh@969 -- # kill 84286 00:14:38.611 10:35:06 ublk_recovery -- common/autotest_common.sh@974 -- # wait 84286 00:14:38.611 [2024-09-28 10:35:06.678329] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:38.611 [2024-09-28 10:35:06.678380] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:38.611 00:14:38.611 real 1m2.914s 00:14:38.611 user 1m40.419s 00:14:38.611 sys 0m36.136s 00:14:38.611 10:35:06 ublk_recovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:38.611 10:35:06 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:38.611 ************************************ 00:14:38.611 END TEST ublk_recovery 00:14:38.611 ************************************ 00:14:38.611 10:35:06 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:14:38.611 10:35:06 -- spdk/autotest.sh@256 -- # timing_exit lib 00:14:38.611 10:35:06 -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:38.611 10:35:06 -- common/autotest_common.sh@10 -- # set +x 00:14:38.611 10:35:07 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:14:38.611 10:35:07 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:14:38.611 10:35:07 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:14:38.611 10:35:07 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:14:38.611 10:35:07 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:14:38.611 10:35:07 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:14:38.611 10:35:07 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:14:38.611 10:35:07 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:14:38.611 10:35:07 -- 
spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:14:38.611 10:35:07 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:14:38.611 10:35:07 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:38.611 10:35:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:38.611 10:35:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:38.611 10:35:07 -- common/autotest_common.sh@10 -- # set +x 00:14:38.611 ************************************ 00:14:38.611 START TEST ftl 00:14:38.611 ************************************ 00:14:38.611 10:35:07 ftl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:38.611 * Looking for test storage... 00:14:38.611 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:14:38.611 10:35:07 ftl -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:38.611 10:35:07 ftl -- common/autotest_common.sh@1681 -- # lcov --version 00:14:38.611 10:35:07 ftl -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:38.611 10:35:07 ftl -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:38.611 10:35:07 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:38.611 10:35:07 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:38.611 10:35:07 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:38.611 10:35:07 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:14:38.611 10:35:07 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:14:38.611 10:35:07 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:14:38.611 10:35:07 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:14:38.611 10:35:07 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:14:38.611 10:35:07 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:14:38.611 10:35:07 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:14:38.611 10:35:07 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:38.611 10:35:07 ftl -- scripts/common.sh@344 -- # case "$op" in 00:14:38.611 10:35:07 ftl -- scripts/common.sh@345 -- # : 1 00:14:38.611 10:35:07 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:38.611 10:35:07 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:38.611 10:35:07 ftl -- scripts/common.sh@365 -- # decimal 1 00:14:38.611 10:35:07 ftl -- scripts/common.sh@353 -- # local d=1 00:14:38.611 10:35:07 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:38.611 10:35:07 ftl -- scripts/common.sh@355 -- # echo 1 00:14:38.611 10:35:07 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:14:38.611 10:35:07 ftl -- scripts/common.sh@366 -- # decimal 2 00:14:38.611 10:35:07 ftl -- scripts/common.sh@353 -- # local d=2 00:14:38.611 10:35:07 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:38.611 10:35:07 ftl -- scripts/common.sh@355 -- # echo 2 00:14:38.611 10:35:07 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:14:38.611 10:35:07 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:38.611 10:35:07 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:38.611 10:35:07 ftl -- scripts/common.sh@368 -- # return 0 00:14:38.611 10:35:07 ftl -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:38.611 10:35:07 ftl -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:38.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.611 --rc genhtml_branch_coverage=1 00:14:38.611 --rc genhtml_function_coverage=1 00:14:38.611 --rc genhtml_legend=1 00:14:38.612 --rc geninfo_all_blocks=1 00:14:38.612 --rc geninfo_unexecuted_blocks=1 00:14:38.612 00:14:38.612 ' 00:14:38.612 10:35:07 ftl -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:38.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.612 --rc genhtml_branch_coverage=1 00:14:38.612 --rc genhtml_function_coverage=1 00:14:38.612 --rc genhtml_legend=1 00:14:38.612 --rc geninfo_all_blocks=1 00:14:38.612 --rc geninfo_unexecuted_blocks=1 00:14:38.612 00:14:38.612 ' 00:14:38.612 10:35:07 ftl -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:38.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.612 --rc genhtml_branch_coverage=1 00:14:38.612 --rc genhtml_function_coverage=1 00:14:38.612 --rc genhtml_legend=1 00:14:38.612 --rc geninfo_all_blocks=1 00:14:38.612 --rc geninfo_unexecuted_blocks=1 00:14:38.612 00:14:38.612 ' 00:14:38.612 10:35:07 ftl -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:38.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.612 --rc genhtml_branch_coverage=1 00:14:38.612 --rc genhtml_function_coverage=1 00:14:38.612 --rc genhtml_legend=1 00:14:38.612 --rc geninfo_all_blocks=1 00:14:38.612 --rc geninfo_unexecuted_blocks=1 00:14:38.612 00:14:38.612 ' 00:14:38.612 10:35:07 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:14:38.612 10:35:07 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:38.612 10:35:07 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:14:38.612 10:35:07 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:14:38.612 10:35:07 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:14:38.612 10:35:07 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:14:38.612 10:35:07 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:38.612 10:35:07 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:14:38.612 10:35:07 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:14:38.612 10:35:07 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:38.612 10:35:07 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:38.612 10:35:07 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:14:38.612 10:35:07 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:14:38.612 10:35:07 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:38.612 10:35:07 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:38.612 10:35:07 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:14:38.612 10:35:07 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:14:38.612 10:35:07 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:38.612 10:35:07 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:38.612 10:35:07 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:14:38.612 10:35:07 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:14:38.612 10:35:07 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:38.612 10:35:07 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:38.612 10:35:07 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:38.612 10:35:07 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:38.612 10:35:07 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:14:38.612 10:35:07 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:14:38.612 10:35:07 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:38.612 10:35:07 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:38.612 10:35:07 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:38.612 10:35:07 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:14:38.612 10:35:07 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:14:38.612 10:35:07 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:14:38.612 10:35:07 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:14:38.612 10:35:07 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:38.612 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:38.612 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:38.612 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:38.612 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:38.612 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:38.612 10:35:07 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=85083 00:14:38.612 10:35:07 ftl -- ftl/ftl.sh@38 -- # waitforlisten 85083 00:14:38.612 10:35:07 ftl -- common/autotest_common.sh@831 -- # '[' -z 85083 ']' 00:14:38.612 10:35:07 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:14:38.612 10:35:07 ftl -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.612 10:35:07 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:38.612 10:35:07 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:38.612 10:35:07 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:38.612 10:35:07 ftl -- common/autotest_common.sh@10 -- # set +x 00:14:38.612 [2024-09-28 10:35:07.692115] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:14:38.612 [2024-09-28 10:35:07.692522] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85083 ] 00:14:38.612 [2024-09-28 10:35:07.820450] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:38.612 [2024-09-28 10:35:07.837827] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.612 [2024-09-28 10:35:07.866141] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.612 10:35:08 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:38.612 10:35:08 ftl -- common/autotest_common.sh@864 -- # return 0 00:14:38.612 10:35:08 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:14:38.612 10:35:08 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:14:38.612 10:35:09 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:14:38.612 10:35:09 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:38.612 10:35:09 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:14:38.612 10:35:09 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:14:38.612 10:35:09 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:14:38.612 10:35:09 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:14:38.612 10:35:09 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:14:38.612 10:35:09 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:14:38.612 10:35:09 ftl -- ftl/ftl.sh@50 -- # break 00:14:38.612 10:35:09 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:14:38.612 10:35:09 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:14:38.612 10:35:09 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:14:38.612 10:35:09 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:14:38.612 10:35:09 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:14:38.612 10:35:09 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:14:38.612 10:35:09 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:14:38.612 10:35:09 ftl -- ftl/ftl.sh@63 -- # break 00:14:38.612 10:35:09 ftl -- ftl/ftl.sh@66 -- # killprocess 85083 00:14:38.612 10:35:09 ftl -- common/autotest_common.sh@950 -- # '[' -z 85083 ']' 00:14:38.612 10:35:09 ftl -- common/autotest_common.sh@954 -- # kill -0 85083 
00:14:38.612 10:35:09 ftl -- common/autotest_common.sh@955 -- # uname 00:14:38.612 10:35:09 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:38.612 10:35:09 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85083 00:14:38.612 killing process with pid 85083 00:14:38.612 10:35:09 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:38.612 10:35:09 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:38.612 10:35:09 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85083' 00:14:38.612 10:35:09 ftl -- common/autotest_common.sh@969 -- # kill 85083 00:14:38.612 10:35:09 ftl -- common/autotest_common.sh@974 -- # wait 85083 00:14:38.612 10:35:10 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:14:38.612 10:35:10 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:14:38.612 10:35:10 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:38.612 10:35:10 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:38.612 10:35:10 ftl -- common/autotest_common.sh@10 -- # set +x 00:14:38.612 ************************************ 00:14:38.612 START TEST ftl_fio_basic 00:14:38.612 ************************************ 00:14:38.612 10:35:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:14:38.612 * Looking for test storage... 00:14:38.612 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:14:38.612 10:35:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:38.612 10:35:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lcov --version 00:14:38.612 10:35:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:38.612 10:35:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:38.612 10:35:10 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:38.612 10:35:10 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:38.612 10:35:10 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:38.612 10:35:10 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:14:38.612 10:35:10 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:14:38.612 10:35:10 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:14:38.612 10:35:10 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:14:38.612 10:35:10 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:14:38.612 10:35:10 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:14:38.612 10:35:10 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:38.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.613 --rc genhtml_branch_coverage=1 00:14:38.613 --rc genhtml_function_coverage=1 00:14:38.613 --rc genhtml_legend=1 00:14:38.613 --rc geninfo_all_blocks=1 00:14:38.613 --rc geninfo_unexecuted_blocks=1 00:14:38.613 00:14:38.613 ' 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:38.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.613 --rc genhtml_branch_coverage=1 00:14:38.613 --rc genhtml_function_coverage=1 00:14:38.613 --rc genhtml_legend=1 00:14:38.613 --rc geninfo_all_blocks=1 00:14:38.613 --rc geninfo_unexecuted_blocks=1 00:14:38.613 00:14:38.613 ' 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:38.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.613 --rc genhtml_branch_coverage=1 00:14:38.613 --rc genhtml_function_coverage=1 00:14:38.613 --rc genhtml_legend=1 00:14:38.613 --rc geninfo_all_blocks=1 00:14:38.613 --rc geninfo_unexecuted_blocks=1 00:14:38.613 00:14:38.613 ' 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:38.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.613 --rc genhtml_branch_coverage=1 00:14:38.613 --rc genhtml_function_coverage=1 00:14:38.613 --rc genhtml_legend=1 00:14:38.613 --rc geninfo_all_blocks=1 00:14:38.613 --rc geninfo_unexecuted_blocks=1 00:14:38.613 00:14:38.613 ' 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:14:38.613 10:35:10 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=85193 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 85193 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- common/autotest_common.sh@831 -- # '[' -z 85193 ']' 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:38.613 10:35:10 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:14:38.613 [2024-09-28 10:35:10.404559] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:14:38.613 [2024-09-28 10:35:10.404782] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85193 ] 00:14:38.613 [2024-09-28 10:35:10.533400] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:14:38.613 [2024-09-28 10:35:10.552761] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:38.613 [2024-09-28 10:35:10.582649] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:38.613 [2024-09-28 10:35:10.582885] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:38.613 [2024-09-28 10:35:10.582841] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.613 10:35:11 ftl.ftl_fio_basic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:38.613 10:35:11 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # return 0 00:14:38.613 10:35:11 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:14:38.613 10:35:11 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:14:38.613 10:35:11 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:14:38.613 10:35:11 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:14:38.613 10:35:11 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:14:38.613 10:35:11 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:14:38.613 10:35:11 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:14:38.613 10:35:11 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:14:38.613 10:35:11 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:14:38.613 10:35:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:14:38.613 10:35:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:14:38.613 10:35:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:14:38.613 10:35:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:14:38.613 10:35:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:14:38.613 10:35:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:14:38.613 { 00:14:38.613 "name": "nvme0n1", 00:14:38.613 "aliases": [ 00:14:38.614 "98908848-8e94-4b19-964d-eb12e37e247c" 00:14:38.614 ], 00:14:38.614 "product_name": "NVMe disk", 00:14:38.614 "block_size": 4096, 00:14:38.614 "num_blocks": 1310720, 00:14:38.614 "uuid": "98908848-8e94-4b19-964d-eb12e37e247c", 00:14:38.614 "numa_id": -1, 00:14:38.614 "assigned_rate_limits": { 00:14:38.614 "rw_ios_per_sec": 0, 00:14:38.614 "rw_mbytes_per_sec": 0, 00:14:38.614 "r_mbytes_per_sec": 0, 00:14:38.614 "w_mbytes_per_sec": 0 00:14:38.614 }, 00:14:38.614 "claimed": false, 00:14:38.614 "zoned": false, 00:14:38.614 "supported_io_types": { 00:14:38.614 "read": true, 00:14:38.614 "write": true, 00:14:38.614 "unmap": true, 00:14:38.614 "flush": true, 00:14:38.614 "reset": true, 00:14:38.614 "nvme_admin": true, 00:14:38.614 "nvme_io": true, 00:14:38.614 "nvme_io_md": false, 00:14:38.614 "write_zeroes": true, 00:14:38.614 "zcopy": false, 00:14:38.614 "get_zone_info": false, 00:14:38.614 "zone_management": false, 00:14:38.614 "zone_append": false, 00:14:38.614 "compare": true, 00:14:38.614 "compare_and_write": false, 00:14:38.614 "abort": true, 00:14:38.614 "seek_hole": false, 00:14:38.614 "seek_data": false, 00:14:38.614 "copy": true, 00:14:38.614 "nvme_iov_md": false 00:14:38.614 }, 00:14:38.614 "driver_specific": { 00:14:38.614 "nvme": [ 00:14:38.614 { 00:14:38.614 "pci_address": "0000:00:11.0", 00:14:38.614 "trid": { 00:14:38.614 "trtype": "PCIe", 00:14:38.614 
"traddr": "0000:00:11.0" 00:14:38.614 }, 00:14:38.614 "ctrlr_data": { 00:14:38.614 "cntlid": 0, 00:14:38.614 "vendor_id": "0x1b36", 00:14:38.614 "model_number": "QEMU NVMe Ctrl", 00:14:38.614 "serial_number": "12341", 00:14:38.614 "firmware_revision": "8.0.0", 00:14:38.614 "subnqn": "nqn.2019-08.org.qemu:12341", 00:14:38.614 "oacs": { 00:14:38.614 "security": 0, 00:14:38.614 "format": 1, 00:14:38.614 "firmware": 0, 00:14:38.614 "ns_manage": 1 00:14:38.614 }, 00:14:38.614 "multi_ctrlr": false, 00:14:38.614 "ana_reporting": false 00:14:38.614 }, 00:14:38.614 "vs": { 00:14:38.614 "nvme_version": "1.4" 00:14:38.614 }, 00:14:38.614 "ns_data": { 00:14:38.614 "id": 1, 00:14:38.614 "can_share": false 00:14:38.614 } 00:14:38.614 } 00:14:38.614 ], 00:14:38.614 "mp_policy": "active_passive" 00:14:38.614 } 00:14:38.614 } 00:14:38.614 ]' 00:14:38.614 10:35:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:14:38.614 10:35:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:14:38.614 10:35:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:14:38.614 10:35:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:14:38.614 10:35:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:14:38.614 10:35:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:14:38.614 10:35:11 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:14:38.614 10:35:11 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:14:38.614 10:35:11 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:14:38.614 10:35:11 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:14:38.614 10:35:11 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:14:38.614 10:35:11 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:14:38.614 10:35:11 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:14:38.614 10:35:12 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=37f38d5a-d1cf-4c6f-9908-79968f04c500 00:14:38.614 10:35:12 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 37f38d5a-d1cf-4c6f-9908-79968f04c500 00:14:38.614 10:35:12 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=1a6d917b-b3e9-4b57-905c-5f71cebf2f29 00:14:38.614 10:35:12 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 1a6d917b-b3e9-4b57-905c-5f71cebf2f29 00:14:38.614 10:35:12 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:14:38.614 10:35:12 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:14:38.614 10:35:12 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=1a6d917b-b3e9-4b57-905c-5f71cebf2f29 00:14:38.614 10:35:12 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:14:38.614 10:35:12 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 1a6d917b-b3e9-4b57-905c-5f71cebf2f29 00:14:38.614 10:35:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=1a6d917b-b3e9-4b57-905c-5f71cebf2f29 00:14:38.614 10:35:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:14:38.614 10:35:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:14:38.614 10:35:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:14:38.614 10:35:12 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1a6d917b-b3e9-4b57-905c-5f71cebf2f29 00:14:38.614 10:35:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:14:38.614 { 00:14:38.614 "name": "1a6d917b-b3e9-4b57-905c-5f71cebf2f29", 00:14:38.614 "aliases": [ 00:14:38.614 "lvs/nvme0n1p0" 00:14:38.614 ], 00:14:38.614 "product_name": "Logical Volume", 00:14:38.614 "block_size": 4096, 00:14:38.614 "num_blocks": 26476544, 00:14:38.614 "uuid": "1a6d917b-b3e9-4b57-905c-5f71cebf2f29", 00:14:38.614 "assigned_rate_limits": { 00:14:38.614 "rw_ios_per_sec": 0, 00:14:38.614 "rw_mbytes_per_sec": 0, 00:14:38.614 "r_mbytes_per_sec": 0, 00:14:38.614 "w_mbytes_per_sec": 0 00:14:38.614 }, 00:14:38.614 "claimed": false, 00:14:38.614 "zoned": false, 00:14:38.614 "supported_io_types": { 00:14:38.614 "read": true, 00:14:38.614 "write": true, 00:14:38.614 "unmap": true, 00:14:38.614 "flush": false, 00:14:38.614 "reset": true, 00:14:38.614 "nvme_admin": false, 00:14:38.614 "nvme_io": false, 00:14:38.614 "nvme_io_md": false, 00:14:38.614 "write_zeroes": true, 00:14:38.614 "zcopy": false, 00:14:38.614 "get_zone_info": false, 00:14:38.614 "zone_management": false, 00:14:38.614 "zone_append": false, 00:14:38.614 "compare": false, 00:14:38.614 "compare_and_write": false, 00:14:38.614 "abort": false, 00:14:38.614 "seek_hole": true, 00:14:38.614 "seek_data": true, 00:14:38.614 "copy": false, 00:14:38.614 "nvme_iov_md": false 00:14:38.614 }, 00:14:38.614 "driver_specific": { 00:14:38.614 "lvol": { 00:14:38.614 "lvol_store_uuid": "37f38d5a-d1cf-4c6f-9908-79968f04c500", 00:14:38.614 "base_bdev": "nvme0n1", 00:14:38.614 "thin_provision": true, 00:14:38.614 "num_allocated_clusters": 0, 00:14:38.614 "snapshot": false, 00:14:38.614 "clone": false, 00:14:38.614 "esnap_clone": false 00:14:38.614 } 00:14:38.614 } 00:14:38.614 } 00:14:38.614 ]' 00:14:38.614 10:35:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:14:38.614 10:35:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:14:38.614 10:35:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:14:38.614 10:35:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:14:38.614 10:35:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:14:38.614 10:35:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:14:38.614 10:35:12 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:14:38.614 10:35:12 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:14:38.614 10:35:12 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:14:38.614 10:35:12 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:14:38.614 10:35:12 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:14:38.614 10:35:12 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 1a6d917b-b3e9-4b57-905c-5f71cebf2f29 00:14:38.614 10:35:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=1a6d917b-b3e9-4b57-905c-5f71cebf2f29 00:14:38.614 10:35:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:14:38.614 10:35:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:14:38.614 10:35:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:14:38.614 10:35:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1a6d917b-b3e9-4b57-905c-5f71cebf2f29 00:14:38.614 10:35:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:14:38.614 { 00:14:38.614 "name": "1a6d917b-b3e9-4b57-905c-5f71cebf2f29", 00:14:38.614 "aliases": [ 00:14:38.614 "lvs/nvme0n1p0" 00:14:38.614 ], 00:14:38.614 "product_name": "Logical Volume", 00:14:38.614 "block_size": 4096, 00:14:38.614 "num_blocks": 26476544, 00:14:38.614 "uuid": "1a6d917b-b3e9-4b57-905c-5f71cebf2f29", 00:14:38.614 "assigned_rate_limits": { 00:14:38.614 "rw_ios_per_sec": 0, 00:14:38.614 "rw_mbytes_per_sec": 0, 00:14:38.614 "r_mbytes_per_sec": 0, 00:14:38.614 "w_mbytes_per_sec": 0 00:14:38.614 }, 00:14:38.614 "claimed": false, 00:14:38.614 "zoned": false, 00:14:38.614 "supported_io_types": { 00:14:38.614 "read": true, 00:14:38.614 "write": true, 00:14:38.614 "unmap": true, 00:14:38.614 "flush": false, 00:14:38.614 "reset": true, 00:14:38.614 "nvme_admin": false, 00:14:38.615 "nvme_io": false, 00:14:38.615 "nvme_io_md": false, 00:14:38.615 "write_zeroes": true, 00:14:38.615 "zcopy": false, 00:14:38.615 "get_zone_info": false, 00:14:38.615 "zone_management": false, 00:14:38.615 "zone_append": false, 00:14:38.615 "compare": false, 00:14:38.615 "compare_and_write": false, 00:14:38.615 "abort": false, 00:14:38.615 "seek_hole": true, 00:14:38.615 "seek_data": true, 00:14:38.615 "copy": false, 00:14:38.615 "nvme_iov_md": false 00:14:38.615 }, 00:14:38.615 "driver_specific": { 00:14:38.615 "lvol": { 00:14:38.615 "lvol_store_uuid": "37f38d5a-d1cf-4c6f-9908-79968f04c500", 00:14:38.615 "base_bdev": "nvme0n1", 00:14:38.615 "thin_provision": true, 00:14:38.615 "num_allocated_clusters": 0, 00:14:38.615 "snapshot": false, 00:14:38.615 "clone": false, 00:14:38.615 "esnap_clone": false 00:14:38.615 } 00:14:38.615 } 00:14:38.615 } 00:14:38.615 ]' 00:14:38.615 10:35:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:14:38.615 10:35:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:14:38.615 10:35:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:14:38.615 10:35:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:14:38.615 10:35:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:14:38.615 10:35:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:14:38.615 10:35:13 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:14:38.615 10:35:13 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:14:38.615 10:35:13 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:14:38.615 10:35:13 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:14:38.615 10:35:13 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:14:38.615 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:14:38.615 10:35:13 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 1a6d917b-b3e9-4b57-905c-5f71cebf2f29 00:14:38.615 10:35:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=1a6d917b-b3e9-4b57-905c-5f71cebf2f29 00:14:38.615 10:35:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:14:38.615 10:35:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:14:38.615 10:35:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:14:38.615 10:35:13 ftl.ftl_fio_basic 
-- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1a6d917b-b3e9-4b57-905c-5f71cebf2f29 00:14:38.873 10:35:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:14:38.873 { 00:14:38.873 "name": "1a6d917b-b3e9-4b57-905c-5f71cebf2f29", 00:14:38.873 "aliases": [ 00:14:38.873 "lvs/nvme0n1p0" 00:14:38.873 ], 00:14:38.873 "product_name": "Logical Volume", 00:14:38.873 "block_size": 4096, 00:14:38.873 "num_blocks": 26476544, 00:14:38.873 "uuid": "1a6d917b-b3e9-4b57-905c-5f71cebf2f29", 00:14:38.873 "assigned_rate_limits": { 00:14:38.873 "rw_ios_per_sec": 0, 00:14:38.873 "rw_mbytes_per_sec": 0, 00:14:38.873 "r_mbytes_per_sec": 0, 00:14:38.873 "w_mbytes_per_sec": 0 00:14:38.873 }, 00:14:38.873 "claimed": false, 00:14:38.873 "zoned": false, 00:14:38.873 "supported_io_types": { 00:14:38.873 "read": true, 00:14:38.873 "write": true, 00:14:38.873 "unmap": true, 00:14:38.873 "flush": false, 00:14:38.873 "reset": true, 00:14:38.873 "nvme_admin": false, 00:14:38.873 "nvme_io": false, 00:14:38.873 "nvme_io_md": false, 00:14:38.873 "write_zeroes": true, 00:14:38.873 "zcopy": false, 00:14:38.873 "get_zone_info": false, 00:14:38.873 "zone_management": false, 00:14:38.873 "zone_append": false, 00:14:38.873 "compare": false, 00:14:38.873 "compare_and_write": false, 00:14:38.873 "abort": false, 00:14:38.873 "seek_hole": true, 00:14:38.873 "seek_data": true, 00:14:38.873 "copy": false, 00:14:38.873 "nvme_iov_md": false 00:14:38.873 }, 00:14:38.873 "driver_specific": { 00:14:38.873 "lvol": { 00:14:38.873 "lvol_store_uuid": "37f38d5a-d1cf-4c6f-9908-79968f04c500", 00:14:38.873 "base_bdev": "nvme0n1", 00:14:38.873 "thin_provision": true, 00:14:38.873 "num_allocated_clusters": 0, 00:14:38.874 "snapshot": false, 00:14:38.874 "clone": false, 00:14:38.874 "esnap_clone": false 00:14:38.874 } 00:14:38.874 } 00:14:38.874 } 00:14:38.874 ]' 00:14:38.874 10:35:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:14:38.874 10:35:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:14:38.874 10:35:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:14:38.874 10:35:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:14:38.874 10:35:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:14:38.874 10:35:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:14:38.874 10:35:13 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:14:38.874 10:35:13 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:14:38.874 10:35:13 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 1a6d917b-b3e9-4b57-905c-5f71cebf2f29 -c nvc0n1p0 --l2p_dram_limit 60 00:14:39.133 [2024-09-28 10:35:13.700588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:39.133 [2024-09-28 10:35:13.700626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:14:39.133 [2024-09-28 10:35:13.700647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:14:39.133 [2024-09-28 10:35:13.700653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:39.133 [2024-09-28 10:35:13.700712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:39.133 [2024-09-28 10:35:13.700721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:14:39.133 [2024-09-28 10:35:13.700731] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:14:39.133 [2024-09-28 10:35:13.700737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:39.133 [2024-09-28 10:35:13.700765] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:14:39.133 [2024-09-28 10:35:13.700987] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:14:39.133 [2024-09-28 10:35:13.701019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:39.133 [2024-09-28 10:35:13.701025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:14:39.133 [2024-09-28 10:35:13.701033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:14:39.133 [2024-09-28 10:35:13.701039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:39.133 [2024-09-28 10:35:13.701108] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 8ba8cd4e-fbf9-49ff-922a-89ecd9a86588 00:14:39.133 [2024-09-28 10:35:13.702020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:39.133 [2024-09-28 10:35:13.702048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:14:39.133 [2024-09-28 10:35:13.702056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:14:39.133 [2024-09-28 10:35:13.702072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:39.133 [2024-09-28 10:35:13.706585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:39.133 [2024-09-28 10:35:13.706611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:14:39.133 [2024-09-28 10:35:13.706620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.468 ms 00:14:39.133 [2024-09-28 10:35:13.706632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:39.133 [2024-09-28 10:35:13.706704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:39.133 [2024-09-28 10:35:13.706720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:14:39.133 [2024-09-28 10:35:13.706726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:14:39.133 [2024-09-28 10:35:13.706734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:39.133 [2024-09-28 10:35:13.706769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:39.133 [2024-09-28 10:35:13.706778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:14:39.133 [2024-09-28 10:35:13.706784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:14:39.133 [2024-09-28 10:35:13.706790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:39.133 [2024-09-28 10:35:13.706827] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:14:39.133 [2024-09-28 10:35:13.708068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:39.133 [2024-09-28 10:35:13.708179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:14:39.133 [2024-09-28 10:35:13.708194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.243 ms 00:14:39.133 [2024-09-28 10:35:13.708200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:39.133 [2024-09-28 10:35:13.708229] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:39.133 [2024-09-28 10:35:13.708244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:14:39.133 [2024-09-28 10:35:13.708253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:14:39.133 [2024-09-28 10:35:13.708260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:39.133 [2024-09-28 10:35:13.708282] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:14:39.133 [2024-09-28 10:35:13.708408] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:14:39.133 [2024-09-28 10:35:13.708418] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:14:39.133 [2024-09-28 10:35:13.708427] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:14:39.133 [2024-09-28 10:35:13.708438] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:14:39.133 [2024-09-28 10:35:13.708444] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:14:39.133 [2024-09-28 10:35:13.708453] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:14:39.133 [2024-09-28 10:35:13.708466] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:14:39.133 [2024-09-28 10:35:13.708473] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:14:39.133 [2024-09-28 10:35:13.708478] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:14:39.133 [2024-09-28 10:35:13.708485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:39.133 [2024-09-28 10:35:13.708491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:14:39.133 [2024-09-28 10:35:13.708498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.204 ms 00:14:39.133 [2024-09-28 10:35:13.708511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:39.133 [2024-09-28 10:35:13.708581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:39.133 [2024-09-28 10:35:13.708586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:14:39.133 [2024-09-28 10:35:13.708593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:14:39.133 [2024-09-28 10:35:13.708600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:39.133 [2024-09-28 10:35:13.708690] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:14:39.133 [2024-09-28 10:35:13.708697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:14:39.133 [2024-09-28 10:35:13.708704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:14:39.133 [2024-09-28 10:35:13.708710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:39.133 [2024-09-28 10:35:13.708725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:14:39.133 [2024-09-28 10:35:13.708730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:14:39.133 [2024-09-28 10:35:13.708737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:14:39.133 [2024-09-28 10:35:13.708741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:14:39.133 
[2024-09-28 10:35:13.708748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:14:39.133 [2024-09-28 10:35:13.708753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:14:39.133 [2024-09-28 10:35:13.708759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:14:39.133 [2024-09-28 10:35:13.708764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:14:39.133 [2024-09-28 10:35:13.708773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:14:39.133 [2024-09-28 10:35:13.708779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:14:39.133 [2024-09-28 10:35:13.708786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:14:39.133 [2024-09-28 10:35:13.708791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:39.133 [2024-09-28 10:35:13.708798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:14:39.133 [2024-09-28 10:35:13.708804] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:14:39.133 [2024-09-28 10:35:13.708811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:39.133 [2024-09-28 10:35:13.708817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:14:39.133 [2024-09-28 10:35:13.708824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:14:39.133 [2024-09-28 10:35:13.708829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:14:39.133 [2024-09-28 10:35:13.708836] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:14:39.133 [2024-09-28 10:35:13.708842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:14:39.133 [2024-09-28 10:35:13.708849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:14:39.133 [2024-09-28 10:35:13.708868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:14:39.133 [2024-09-28 10:35:13.708875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:14:39.133 [2024-09-28 10:35:13.708881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:14:39.133 [2024-09-28 10:35:13.708891] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:14:39.134 [2024-09-28 10:35:13.708897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:14:39.134 [2024-09-28 10:35:13.708904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:14:39.134 [2024-09-28 10:35:13.708910] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:14:39.134 [2024-09-28 10:35:13.708917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:14:39.134 [2024-09-28 10:35:13.708922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:14:39.134 [2024-09-28 10:35:13.708929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:14:39.134 [2024-09-28 10:35:13.708935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:14:39.134 [2024-09-28 10:35:13.708942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:14:39.134 [2024-09-28 10:35:13.708948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:14:39.134 [2024-09-28 10:35:13.708955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:14:39.134 [2024-09-28 10:35:13.708969] ftl_layout.c: 133:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:14:39.134 [2024-09-28 10:35:13.708977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:14:39.134 [2024-09-28 10:35:13.708983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:14:39.134 [2024-09-28 10:35:13.708990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:39.134 [2024-09-28 10:35:13.708996] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:14:39.134 [2024-09-28 10:35:13.709005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:14:39.134 [2024-09-28 10:35:13.709012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:14:39.134 [2024-09-28 10:35:13.709024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:39.134 [2024-09-28 10:35:13.709033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:14:39.134 [2024-09-28 10:35:13.709040] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:14:39.134 [2024-09-28 10:35:13.709046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:14:39.134 [2024-09-28 10:35:13.709053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:14:39.134 [2024-09-28 10:35:13.709059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:14:39.134 [2024-09-28 10:35:13.709067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:14:39.134 [2024-09-28 10:35:13.709075] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:14:39.134 [2024-09-28 10:35:13.709084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:14:39.134 [2024-09-28 10:35:13.709091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:14:39.134 [2024-09-28 10:35:13.709099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:14:39.134 [2024-09-28 10:35:13.709107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:14:39.134 [2024-09-28 10:35:13.709114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:14:39.134 [2024-09-28 10:35:13.709121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:14:39.134 [2024-09-28 10:35:13.709130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:14:39.134 [2024-09-28 10:35:13.709136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:14:39.134 [2024-09-28 10:35:13.709144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:14:39.134 [2024-09-28 10:35:13.709150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:14:39.134 [2024-09-28 10:35:13.709158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 
blk_sz:0x20 00:14:39.134 [2024-09-28 10:35:13.709164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:14:39.134 [2024-09-28 10:35:13.709171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:14:39.134 [2024-09-28 10:35:13.709177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:14:39.134 [2024-09-28 10:35:13.709184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:14:39.134 [2024-09-28 10:35:13.709190] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:14:39.134 [2024-09-28 10:35:13.709198] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:14:39.134 [2024-09-28 10:35:13.709212] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:14:39.134 [2024-09-28 10:35:13.709220] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:14:39.134 [2024-09-28 10:35:13.709225] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:14:39.134 [2024-09-28 10:35:13.709231] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:14:39.134 [2024-09-28 10:35:13.709237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:39.134 [2024-09-28 10:35:13.709246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:14:39.134 [2024-09-28 10:35:13.709252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.599 ms 00:14:39.134 [2024-09-28 10:35:13.709258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:39.134 [2024-09-28 10:35:13.709298] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
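The layout just dumped follows from the RPC sequence recorded earlier in this trace. Condensed into a stand-alone sketch (the lvstore and lvol UUIDs are placeholders here; the real values appear above), the device stack is built roughly as:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Base (data) device: a 103424 MiB thin-provisioned lvol on the 0000:00:11.0 namespace.
    $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    $RPC bdev_lvol_create_lvstore nvme0n1 lvs
    $RPC bdev_lvol_create nvme0n1p0 103424 -t -u <lvstore-uuid>
    # Write-buffer (NV cache) device: a 5171 MiB split of the 0000:00:10.0 controller.
    $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    $RPC bdev_split_create nvc0n1 -s 5171 1
    # FTL bdev on top of both, with the L2P table capped at 60 MiB of DRAM.
    $RPC -t 240 bdev_ftl_create -b ftl0 -d <lvol-uuid> -c nvc0n1p0 --l2p_dram_limit 60

The 80.00 MiB l2p region in the dump is consistent with the reported geometry: 20971520 L2P entries at 4 bytes per entry is exactly 80 MiB, and the 60 MiB --l2p_dram_limit passed above matches the ftl_l2p_cache message later in this trace reporting a maximum resident size of 59 (of 60) MiB.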
00:14:39.134 [2024-09-28 10:35:13.709306] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:14:41.665 [2024-09-28 10:35:16.040246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:41.665 [2024-09-28 10:35:16.040304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:14:41.665 [2024-09-28 10:35:16.040319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2330.941 ms 00:14:41.665 [2024-09-28 10:35:16.040330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:41.665 [2024-09-28 10:35:16.056379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:41.665 [2024-09-28 10:35:16.056444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:14:41.665 [2024-09-28 10:35:16.056464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.965 ms 00:14:41.665 [2024-09-28 10:35:16.056484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:41.665 [2024-09-28 10:35:16.056632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:41.665 [2024-09-28 10:35:16.056649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:14:41.665 [2024-09-28 10:35:16.056661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:14:41.665 [2024-09-28 10:35:16.056674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:41.665 [2024-09-28 10:35:16.066835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:41.665 [2024-09-28 10:35:16.067068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:14:41.665 [2024-09-28 10:35:16.067092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.078 ms 00:14:41.665 [2024-09-28 10:35:16.067108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:41.665 [2024-09-28 10:35:16.067167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:41.665 [2024-09-28 10:35:16.067182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:14:41.665 [2024-09-28 10:35:16.067194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:14:41.665 [2024-09-28 10:35:16.067207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:41.665 [2024-09-28 10:35:16.067594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:41.665 [2024-09-28 10:35:16.067619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:14:41.665 [2024-09-28 10:35:16.067665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:14:41.665 [2024-09-28 10:35:16.067682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:41.665 [2024-09-28 10:35:16.067863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:41.665 [2024-09-28 10:35:16.067880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:14:41.665 [2024-09-28 10:35:16.067893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:14:41.665 [2024-09-28 10:35:16.067907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:41.665 [2024-09-28 10:35:16.073228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:41.665 [2024-09-28 10:35:16.073263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:14:41.665 [2024-09-28 
10:35:16.073271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.291 ms 00:14:41.665 [2024-09-28 10:35:16.073280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:41.665 [2024-09-28 10:35:16.081438] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:14:41.665 [2024-09-28 10:35:16.095332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:41.665 [2024-09-28 10:35:16.095362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:14:41.665 [2024-09-28 10:35:16.095375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.986 ms 00:14:41.665 [2024-09-28 10:35:16.095383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:41.665 [2024-09-28 10:35:16.128365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:41.665 [2024-09-28 10:35:16.128403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:14:41.665 [2024-09-28 10:35:16.128417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.953 ms 00:14:41.665 [2024-09-28 10:35:16.128425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:41.665 [2024-09-28 10:35:16.128613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:41.665 [2024-09-28 10:35:16.128626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:14:41.665 [2024-09-28 10:35:16.128635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:14:41.665 [2024-09-28 10:35:16.128642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:41.665 [2024-09-28 10:35:16.131423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:41.665 [2024-09-28 10:35:16.131463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:14:41.665 [2024-09-28 10:35:16.131477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.744 ms 00:14:41.665 [2024-09-28 10:35:16.131486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:41.665 [2024-09-28 10:35:16.133792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:41.665 [2024-09-28 10:35:16.133822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:14:41.665 [2024-09-28 10:35:16.133834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.260 ms 00:14:41.665 [2024-09-28 10:35:16.133841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:41.665 [2024-09-28 10:35:16.134171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:41.665 [2024-09-28 10:35:16.134180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:14:41.665 [2024-09-28 10:35:16.134191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:14:41.665 [2024-09-28 10:35:16.134199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:41.665 [2024-09-28 10:35:16.153817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:41.665 [2024-09-28 10:35:16.153849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:14:41.665 [2024-09-28 10:35:16.153863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.590 ms 00:14:41.665 [2024-09-28 10:35:16.153870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:41.665 [2024-09-28 10:35:16.157518] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:41.665 [2024-09-28 10:35:16.157549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:14:41.665 [2024-09-28 10:35:16.157561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.576 ms 00:14:41.665 [2024-09-28 10:35:16.157570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:41.665 [2024-09-28 10:35:16.160271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:41.665 [2024-09-28 10:35:16.160397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:14:41.665 [2024-09-28 10:35:16.160414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.663 ms 00:14:41.665 [2024-09-28 10:35:16.160422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:41.665 [2024-09-28 10:35:16.163213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:41.665 [2024-09-28 10:35:16.163239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:14:41.665 [2024-09-28 10:35:16.163254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.749 ms 00:14:41.665 [2024-09-28 10:35:16.163262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:41.665 [2024-09-28 10:35:16.163307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:41.665 [2024-09-28 10:35:16.163316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:14:41.665 [2024-09-28 10:35:16.163327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:14:41.665 [2024-09-28 10:35:16.163335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:41.665 [2024-09-28 10:35:16.163416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:41.665 [2024-09-28 10:35:16.163426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:14:41.665 [2024-09-28 10:35:16.163437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:14:41.665 [2024-09-28 10:35:16.163447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:41.665 [2024-09-28 10:35:16.164378] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2463.348 ms, result 0 00:14:41.665 { 00:14:41.665 "name": "ftl0", 00:14:41.665 "uuid": "8ba8cd4e-fbf9-49ff-922a-89ecd9a86588" 00:14:41.665 } 00:14:41.665 10:35:16 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:14:41.665 10:35:16 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:14:41.666 10:35:16 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:41.666 10:35:16 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local i 00:14:41.666 10:35:16 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:41.666 10:35:16 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:41.666 10:35:16 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:41.666 10:35:16 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:14:41.924 [ 00:14:41.924 { 00:14:41.924 "name": "ftl0", 00:14:41.924 "aliases": [ 00:14:41.924 "8ba8cd4e-fbf9-49ff-922a-89ecd9a86588" 00:14:41.924 ], 00:14:41.924 "product_name": "FTL disk", 00:14:41.924 
"block_size": 4096, 00:14:41.924 "num_blocks": 20971520, 00:14:41.924 "uuid": "8ba8cd4e-fbf9-49ff-922a-89ecd9a86588", 00:14:41.924 "assigned_rate_limits": { 00:14:41.924 "rw_ios_per_sec": 0, 00:14:41.924 "rw_mbytes_per_sec": 0, 00:14:41.924 "r_mbytes_per_sec": 0, 00:14:41.924 "w_mbytes_per_sec": 0 00:14:41.924 }, 00:14:41.924 "claimed": false, 00:14:41.924 "zoned": false, 00:14:41.924 "supported_io_types": { 00:14:41.924 "read": true, 00:14:41.924 "write": true, 00:14:41.924 "unmap": true, 00:14:41.924 "flush": true, 00:14:41.924 "reset": false, 00:14:41.924 "nvme_admin": false, 00:14:41.924 "nvme_io": false, 00:14:41.924 "nvme_io_md": false, 00:14:41.924 "write_zeroes": true, 00:14:41.924 "zcopy": false, 00:14:41.924 "get_zone_info": false, 00:14:41.924 "zone_management": false, 00:14:41.924 "zone_append": false, 00:14:41.924 "compare": false, 00:14:41.924 "compare_and_write": false, 00:14:41.924 "abort": false, 00:14:41.924 "seek_hole": false, 00:14:41.924 "seek_data": false, 00:14:41.924 "copy": false, 00:14:41.924 "nvme_iov_md": false 00:14:41.924 }, 00:14:41.924 "driver_specific": { 00:14:41.924 "ftl": { 00:14:41.924 "base_bdev": "1a6d917b-b3e9-4b57-905c-5f71cebf2f29", 00:14:41.924 "cache": "nvc0n1p0" 00:14:41.924 } 00:14:41.924 } 00:14:41.924 } 00:14:41.924 ] 00:14:41.924 10:35:16 ftl.ftl_fio_basic -- common/autotest_common.sh@907 -- # return 0 00:14:41.924 10:35:16 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:14:41.924 10:35:16 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:14:42.182 10:35:16 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:14:42.182 10:35:16 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:14:42.441 [2024-09-28 10:35:16.961024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:42.441 [2024-09-28 10:35:16.961066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:14:42.441 [2024-09-28 10:35:16.961078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:14:42.441 [2024-09-28 10:35:16.961088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:42.441 [2024-09-28 10:35:16.961118] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:14:42.441 [2024-09-28 10:35:16.961554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:42.441 [2024-09-28 10:35:16.961576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:14:42.441 [2024-09-28 10:35:16.961587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.418 ms 00:14:42.441 [2024-09-28 10:35:16.961598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:42.441 [2024-09-28 10:35:16.962017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:42.441 [2024-09-28 10:35:16.962068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:14:42.441 [2024-09-28 10:35:16.962081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.381 ms 00:14:42.441 [2024-09-28 10:35:16.962090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:42.441 [2024-09-28 10:35:16.965342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:42.441 [2024-09-28 10:35:16.965448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:14:42.441 [2024-09-28 
10:35:16.965475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.227 ms 00:14:42.441 [2024-09-28 10:35:16.965484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:42.441 [2024-09-28 10:35:16.971732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:42.441 [2024-09-28 10:35:16.971823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:14:42.441 [2024-09-28 10:35:16.971893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.220 ms 00:14:42.441 [2024-09-28 10:35:16.971915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:42.441 [2024-09-28 10:35:16.973345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:42.441 [2024-09-28 10:35:16.973447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:14:42.441 [2024-09-28 10:35:16.973498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.318 ms 00:14:42.441 [2024-09-28 10:35:16.973520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:42.441 [2024-09-28 10:35:16.976763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:42.441 [2024-09-28 10:35:16.976866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:14:42.441 [2024-09-28 10:35:16.976923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.190 ms 00:14:42.441 [2024-09-28 10:35:16.976948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:42.441 [2024-09-28 10:35:16.977428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:42.441 [2024-09-28 10:35:16.977522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:14:42.441 [2024-09-28 10:35:16.977575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:14:42.441 [2024-09-28 10:35:16.977599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:42.441 [2024-09-28 10:35:16.978973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:42.442 [2024-09-28 10:35:16.979066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:14:42.442 [2024-09-28 10:35:16.979133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.290 ms 00:14:42.442 [2024-09-28 10:35:16.979155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:42.442 [2024-09-28 10:35:16.980192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:42.442 [2024-09-28 10:35:16.980287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:14:42.442 [2024-09-28 10:35:16.980338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.886 ms 00:14:42.442 [2024-09-28 10:35:16.980359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:42.442 [2024-09-28 10:35:16.981189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:42.442 [2024-09-28 10:35:16.981283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:14:42.442 [2024-09-28 10:35:16.981332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.714 ms 00:14:42.442 [2024-09-28 10:35:16.981354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:42.442 [2024-09-28 10:35:16.982188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:42.442 [2024-09-28 10:35:16.982278] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:14:42.442 [2024-09-28 10:35:16.982329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.717 ms 00:14:42.442 [2024-09-28 10:35:16.982349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:42.442 [2024-09-28 10:35:16.982396] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:14:42.442 [2024-09-28 10:35:16.982494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.982533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.982562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.982673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.982703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.982733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.982865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.982897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.982926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.983002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.983034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.983064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.983122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.983154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.983182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.983244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.983375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.983470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.983501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.983533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.983589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.983633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 
10:35:16.983663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.983722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.983755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.983785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.983814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.983883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.983911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.983940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.984015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.984048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.984076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.984135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.984167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.984200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.984228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.984291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.984437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.984471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.984522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.984580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.984631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.984664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.984722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.984754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.984783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:14:42.442 [2024-09-28 10:35:16.984835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.985088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.985101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.985109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.985120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.985128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.985138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.985146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.985155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.985162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.985171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.985178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.985187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.985194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.985203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.985210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.985219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.985226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.985235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.985242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.985253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.985261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.985269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.985276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.985285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.985293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.985302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.985313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.985323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.985330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:14:42.442 [2024-09-28 10:35:16.985339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:14:42.443 [2024-09-28 10:35:16.985347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:14:42.443 [2024-09-28 10:35:16.985356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:14:42.443 [2024-09-28 10:35:16.985363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:14:42.443 [2024-09-28 10:35:16.985374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:14:42.443 [2024-09-28 10:35:16.985381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:14:42.443 [2024-09-28 10:35:16.985391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:14:42.443 [2024-09-28 10:35:16.985398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:14:42.443 [2024-09-28 10:35:16.985407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:14:42.443 [2024-09-28 10:35:16.985414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:14:42.443 [2024-09-28 10:35:16.985424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:14:42.443 [2024-09-28 10:35:16.985431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:14:42.443 [2024-09-28 10:35:16.985440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:14:42.443 [2024-09-28 10:35:16.985447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:14:42.443 [2024-09-28 10:35:16.985456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:14:42.443 [2024-09-28 10:35:16.985463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:14:42.443 [2024-09-28 10:35:16.985472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:14:42.443 [2024-09-28 10:35:16.985479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:14:42.443 [2024-09-28 10:35:16.985502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:14:42.443 [2024-09-28 10:35:16.985509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:14:42.443 [2024-09-28 10:35:16.985517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:14:42.443 [2024-09-28 10:35:16.985524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:14:42.443 [2024-09-28 10:35:16.985535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:14:42.443 [2024-09-28 10:35:16.985550] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:14:42.443 [2024-09-28 10:35:16.985569] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8ba8cd4e-fbf9-49ff-922a-89ecd9a86588 00:14:42.443 [2024-09-28 10:35:16.985576] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:14:42.443 [2024-09-28 10:35:16.985588] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:14:42.443 [2024-09-28 10:35:16.985595] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:14:42.443 [2024-09-28 10:35:16.985604] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:14:42.443 [2024-09-28 10:35:16.985613] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:14:42.443 [2024-09-28 10:35:16.985622] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:14:42.443 [2024-09-28 10:35:16.985629] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:14:42.443 [2024-09-28 10:35:16.985637] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:14:42.443 [2024-09-28 10:35:16.985644] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:14:42.443 [2024-09-28 10:35:16.985653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:42.443 [2024-09-28 10:35:16.985660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:14:42.443 [2024-09-28 10:35:16.985670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.257 ms 00:14:42.443 [2024-09-28 10:35:16.985677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:42.443 [2024-09-28 10:35:16.987357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:42.443 [2024-09-28 10:35:16.987433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:14:42.443 [2024-09-28 10:35:16.987494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.636 ms 00:14:42.443 [2024-09-28 10:35:16.987515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:42.443 [2024-09-28 10:35:16.987637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:42.443 [2024-09-28 10:35:16.987679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:14:42.443 [2024-09-28 10:35:16.987733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:14:42.443 [2024-09-28 10:35:16.987754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:42.443 [2024-09-28 10:35:16.992893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:42.443 [2024-09-28 10:35:16.993018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:14:42.443 [2024-09-28 10:35:16.993083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:42.443 [2024-09-28 10:35:16.993105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:42.443 
[2024-09-28 10:35:16.993203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:42.443 [2024-09-28 10:35:16.993232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:14:42.443 [2024-09-28 10:35:16.993279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:42.443 [2024-09-28 10:35:16.993336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:42.443 [2024-09-28 10:35:16.993440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:42.443 [2024-09-28 10:35:16.993532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:14:42.443 [2024-09-28 10:35:16.993558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:42.443 [2024-09-28 10:35:16.993577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:42.443 [2024-09-28 10:35:16.993610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:42.443 [2024-09-28 10:35:16.993731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:14:42.443 [2024-09-28 10:35:16.993756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:42.443 [2024-09-28 10:35:16.993776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:42.443 [2024-09-28 10:35:17.002715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:42.443 [2024-09-28 10:35:17.002847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:14:42.443 [2024-09-28 10:35:17.002903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:42.443 [2024-09-28 10:35:17.002928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:42.443 [2024-09-28 10:35:17.010487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:42.443 [2024-09-28 10:35:17.010609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:14:42.443 [2024-09-28 10:35:17.010667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:42.443 [2024-09-28 10:35:17.010689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:42.443 [2024-09-28 10:35:17.010778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:42.443 [2024-09-28 10:35:17.010809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:14:42.443 [2024-09-28 10:35:17.010832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:42.443 [2024-09-28 10:35:17.010850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:42.443 [2024-09-28 10:35:17.010924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:42.443 [2024-09-28 10:35:17.011054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:14:42.443 [2024-09-28 10:35:17.011093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:42.443 [2024-09-28 10:35:17.011113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:42.443 [2024-09-28 10:35:17.011216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:42.443 [2024-09-28 10:35:17.011246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:14:42.443 [2024-09-28 10:35:17.011271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:42.443 [2024-09-28 10:35:17.011353] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:42.443 [2024-09-28 10:35:17.011432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:42.443 [2024-09-28 10:35:17.011467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:14:42.443 [2024-09-28 10:35:17.011488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:42.443 [2024-09-28 10:35:17.011508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:42.443 [2024-09-28 10:35:17.011613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:42.443 [2024-09-28 10:35:17.011649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:14:42.443 [2024-09-28 10:35:17.011673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:42.443 [2024-09-28 10:35:17.011691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:42.443 [2024-09-28 10:35:17.011774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:42.443 [2024-09-28 10:35:17.011832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:14:42.443 [2024-09-28 10:35:17.011854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:42.443 [2024-09-28 10:35:17.011872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:42.443 [2024-09-28 10:35:17.012059] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 50.997 ms, result 0 00:14:42.443 true 00:14:42.443 10:35:17 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 85193 00:14:42.443 10:35:17 ftl.ftl_fio_basic -- common/autotest_common.sh@950 -- # '[' -z 85193 ']' 00:14:42.443 10:35:17 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # kill -0 85193 00:14:42.443 10:35:17 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # uname 00:14:42.443 10:35:17 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:42.443 10:35:17 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85193 00:14:42.443 10:35:17 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:42.443 killing process with pid 85193 00:14:42.443 10:35:17 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:42.443 10:35:17 ftl.ftl_fio_basic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85193' 00:14:42.443 10:35:17 ftl.ftl_fio_basic -- common/autotest_common.sh@969 -- # kill 85193 00:14:42.443 10:35:17 ftl.ftl_fio_basic -- common/autotest_common.sh@974 -- # wait 85193 00:14:47.703 10:35:21 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:14:47.703 10:35:21 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:14:47.703 10:35:21 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:14:47.703 10:35:21 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:47.703 10:35:21 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:14:47.703 10:35:21 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:14:47.703 10:35:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:14:47.703 10:35:21 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:14:47.703 10:35:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:47.703 10:35:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:14:47.703 10:35:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:47.703 10:35:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:14:47.703 10:35:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:14:47.703 10:35:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:47.703 10:35:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:47.703 10:35:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:14:47.703 10:35:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:47.703 10:35:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:47.703 10:35:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:47.703 10:35:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:14:47.703 10:35:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:47.703 10:35:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:14:47.703 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:14:47.703 fio-3.35 00:14:47.703 Starting 1 thread 00:14:51.907 00:14:51.907 test: (groupid=0, jobs=1): err= 0: pid=85356: Sat Sep 28 10:35:26 2024 00:14:51.907 read: IOPS=1177, BW=78.2MiB/s (82.0MB/s)(255MiB/3256msec) 00:14:51.907 slat (nsec): min=2940, max=24997, avg=4338.84, stdev=1934.68 00:14:51.907 clat (usec): min=236, max=1250, avg=382.32, stdev=90.74 00:14:51.907 lat (usec): min=241, max=1254, avg=386.66, stdev=91.30 00:14:51.907 clat percentiles (usec): 00:14:51.907 | 1.00th=[ 285], 5.00th=[ 289], 10.00th=[ 289], 20.00th=[ 318], 00:14:51.907 | 30.00th=[ 322], 40.00th=[ 326], 50.00th=[ 334], 60.00th=[ 388], 00:14:51.907 | 70.00th=[ 424], 80.00th=[ 461], 90.00th=[ 523], 95.00th=[ 529], 00:14:51.907 | 99.00th=[ 611], 99.50th=[ 693], 99.90th=[ 873], 99.95th=[ 922], 00:14:51.907 | 99.99th=[ 1254] 00:14:51.907 write: IOPS=1185, BW=78.7MiB/s (82.5MB/s)(256MiB/3253msec); 0 zone resets 00:14:51.907 slat (nsec): min=13327, max=47470, avg=18171.01, stdev=3134.44 00:14:51.907 clat (usec): min=258, max=1228, avg=429.31, stdev=121.66 00:14:51.907 lat (usec): min=277, max=1247, avg=447.48, stdev=122.47 00:14:51.907 clat percentiles (usec): 00:14:51.907 | 1.00th=[ 302], 5.00th=[ 306], 10.00th=[ 310], 20.00th=[ 338], 00:14:51.907 | 30.00th=[ 347], 40.00th=[ 351], 50.00th=[ 363], 60.00th=[ 474], 00:14:51.907 | 70.00th=[ 486], 80.00th=[ 545], 90.00th=[ 562], 95.00th=[ 619], 00:14:51.907 | 99.00th=[ 873], 99.50th=[ 971], 99.90th=[ 1139], 99.95th=[ 1205], 00:14:51.907 | 99.99th=[ 1237] 00:14:51.907 bw ( KiB/s): min=65552, max=99280, per=98.79%, avg=79628.33, stdev=14615.88, samples=6 00:14:51.907 iops : min= 964, max= 1460, avg=1171.00, stdev=214.94, samples=6 00:14:51.907 lat (usec) : 250=0.09%, 500=78.50%, 750=20.41%, 
1000=0.81% 00:14:51.907 lat (msec) : 2=0.20% 00:14:51.907 cpu : usr=99.29%, sys=0.06%, ctx=7, majf=0, minf=1181 00:14:51.907 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:51.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:51.907 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:51.907 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:51.907 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:51.907 00:14:51.907 Run status group 0 (all jobs): 00:14:51.907 READ: bw=78.2MiB/s (82.0MB/s), 78.2MiB/s-78.2MiB/s (82.0MB/s-82.0MB/s), io=255MiB (267MB), run=3256-3256msec 00:14:51.907 WRITE: bw=78.7MiB/s (82.5MB/s), 78.7MiB/s-78.7MiB/s (82.5MB/s-82.5MB/s), io=256MiB (269MB), run=3253-3253msec 00:14:52.168 ----------------------------------------------------- 00:14:52.168 Suppressions used: 00:14:52.168 count bytes template 00:14:52.168 1 5 /usr/src/fio/parse.c 00:14:52.168 1 8 libtcmalloc_minimal.so 00:14:52.168 1 904 libcrypto.so 00:14:52.168 ----------------------------------------------------- 00:14:52.168 00:14:52.168 10:35:26 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:14:52.168 10:35:26 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:52.168 10:35:26 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:14:52.168 10:35:26 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:14:52.168 10:35:26 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:14:52.168 10:35:26 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:52.168 10:35:26 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:14:52.168 10:35:26 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:14:52.168 10:35:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:14:52.169 10:35:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:14:52.169 10:35:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:52.169 10:35:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:14:52.169 10:35:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:52.169 10:35:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:14:52.169 10:35:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:14:52.169 10:35:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:52.169 10:35:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:52.169 10:35:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:14:52.169 10:35:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:52.169 10:35:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:52.169 10:35:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:52.169 10:35:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:14:52.169 10:35:26 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:52.169 10:35:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:14:52.169 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:14:52.169 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:14:52.169 fio-3.35 00:14:52.169 Starting 2 threads 00:15:14.093 00:15:14.093 first_half: (groupid=0, jobs=1): err= 0: pid=85437: Sat Sep 28 10:35:48 2024 00:15:14.093 read: IOPS=3166, BW=12.4MiB/s (13.0MB/s)(256MiB/20679msec) 00:15:14.093 slat (nsec): min=3044, max=17711, avg=3640.76, stdev=492.65 00:15:14.093 clat (usec): min=459, max=249469, avg=34558.12, stdev=20465.59 00:15:14.093 lat (usec): min=463, max=249477, avg=34561.76, stdev=20465.64 00:15:14.093 clat percentiles (msec): 00:15:14.093 | 1.00th=[ 8], 5.00th=[ 29], 10.00th=[ 29], 20.00th=[ 29], 00:15:14.093 | 30.00th=[ 29], 40.00th=[ 29], 50.00th=[ 29], 60.00th=[ 30], 00:15:14.093 | 70.00th=[ 33], 80.00th=[ 34], 90.00th=[ 39], 95.00th=[ 69], 00:15:14.093 | 99.00th=[ 142], 99.50th=[ 148], 99.90th=[ 186], 99.95th=[ 220], 00:15:14.093 | 99.99th=[ 245] 00:15:14.093 write: IOPS=3173, BW=12.4MiB/s (13.0MB/s)(256MiB/20652msec); 0 zone resets 00:15:14.093 slat (usec): min=3, max=466, avg= 4.99, stdev= 2.80 00:15:14.093 clat (usec): min=350, max=37745, avg=5835.58, stdev=5698.17 00:15:14.093 lat (usec): min=356, max=37750, avg=5840.57, stdev=5698.20 00:15:14.093 clat percentiles (usec): 00:15:14.093 | 1.00th=[ 701], 5.00th=[ 807], 10.00th=[ 1106], 20.00th=[ 2409], 00:15:14.093 | 30.00th=[ 3163], 40.00th=[ 3949], 50.00th=[ 4686], 60.00th=[ 5145], 00:15:14.093 | 70.00th=[ 5407], 80.00th=[ 6128], 90.00th=[12125], 95.00th=[18744], 00:15:14.093 | 99.00th=[29230], 99.50th=[30016], 99.90th=[32113], 99.95th=[34866], 00:15:14.093 | 99.99th=[37487] 00:15:14.093 bw ( KiB/s): min= 1139, max=48200, per=97.72%, avg=24807.38, stdev=14559.19, samples=21 00:15:14.093 iops : min= 284, max=12050, avg=6201.81, stdev=3639.86, samples=21 00:15:14.093 lat (usec) : 500=0.03%, 750=1.30%, 1000=3.20% 00:15:14.093 lat (msec) : 2=3.49%, 4=12.41%, 10=23.66%, 20=5.03%, 50=47.84% 00:15:14.093 lat (msec) : 100=1.43%, 250=1.62% 00:15:14.093 cpu : usr=99.34%, sys=0.10%, ctx=27, majf=0, minf=5563 00:15:14.093 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:14.093 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:14.094 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:14.094 issued rwts: total=65475,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:14.094 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:14.094 second_half: (groupid=0, jobs=1): err= 0: pid=85438: Sat Sep 28 10:35:48 2024 00:15:14.094 read: IOPS=3189, BW=12.5MiB/s (13.1MB/s)(256MiB/20535msec) 00:15:14.094 slat (nsec): min=3010, max=35362, avg=3613.26, stdev=490.34 00:15:14.094 clat (msec): min=9, max=169, avg=34.67, stdev=17.92 00:15:14.094 lat (msec): min=9, max=169, avg=34.68, stdev=17.92 00:15:14.094 clat percentiles (msec): 00:15:14.094 | 1.00th=[ 26], 5.00th=[ 29], 10.00th=[ 29], 20.00th=[ 29], 00:15:14.094 | 30.00th=[ 29], 40.00th=[ 29], 50.00th=[ 29], 60.00th=[ 30], 00:15:14.094 | 70.00th=[ 33], 80.00th=[ 34], 90.00th=[ 39], 95.00th=[ 62], 00:15:14.094 | 
99.00th=[ 136], 99.50th=[ 144], 99.90th=[ 155], 99.95th=[ 159], 00:15:14.094 | 99.99th=[ 167] 00:15:14.094 write: IOPS=3209, BW=12.5MiB/s (13.1MB/s)(256MiB/20421msec); 0 zone resets 00:15:14.094 slat (usec): min=3, max=287, avg= 4.93, stdev= 2.74 00:15:14.094 clat (usec): min=357, max=33168, avg=5446.21, stdev=3649.93 00:15:14.094 lat (usec): min=364, max=33172, avg=5451.14, stdev=3650.00 00:15:14.094 clat percentiles (usec): 00:15:14.094 | 1.00th=[ 799], 5.00th=[ 1450], 10.00th=[ 2311], 20.00th=[ 3163], 00:15:14.094 | 30.00th=[ 3687], 40.00th=[ 4228], 50.00th=[ 4686], 60.00th=[ 5145], 00:15:14.094 | 70.00th=[ 5407], 80.00th=[ 6063], 90.00th=[10552], 95.00th=[12649], 00:15:14.094 | 99.00th=[19530], 99.50th=[23987], 99.90th=[31327], 99.95th=[31851], 00:15:14.094 | 99.99th=[32637] 00:15:14.094 bw ( KiB/s): min= 536, max=41808, per=98.33%, avg=24962.24, stdev=16880.88, samples=21 00:15:14.094 iops : min= 134, max=10452, avg=6240.52, stdev=4220.19, samples=21 00:15:14.094 lat (usec) : 500=0.04%, 750=0.32%, 1000=0.91% 00:15:14.094 lat (msec) : 2=2.46%, 4=13.96%, 10=26.40%, 20=5.58%, 50=47.28% 00:15:14.094 lat (msec) : 100=1.71%, 250=1.34% 00:15:14.094 cpu : usr=99.47%, sys=0.10%, ctx=82, majf=0, minf=5579 00:15:14.094 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:14.094 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:14.094 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:14.094 issued rwts: total=65488,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:14.094 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:14.094 00:15:14.094 Run status group 0 (all jobs): 00:15:14.094 READ: bw=24.7MiB/s (25.9MB/s), 12.4MiB/s-12.5MiB/s (13.0MB/s-13.1MB/s), io=512MiB (536MB), run=20535-20679msec 00:15:14.094 WRITE: bw=24.8MiB/s (26.0MB/s), 12.4MiB/s-12.5MiB/s (13.0MB/s-13.1MB/s), io=512MiB (537MB), run=20421-20652msec 00:15:14.666 ----------------------------------------------------- 00:15:14.666 Suppressions used: 00:15:14.666 count bytes template 00:15:14.666 2 10 /usr/src/fio/parse.c 00:15:14.666 4 384 /usr/src/fio/iolog.c 00:15:14.666 1 8 libtcmalloc_minimal.so 00:15:14.666 1 904 libcrypto.so 00:15:14.666 ----------------------------------------------------- 00:15:14.666 00:15:14.666 10:35:49 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:15:14.666 10:35:49 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:14.666 10:35:49 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:14.666 10:35:49 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:14.666 10:35:49 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:15:14.666 10:35:49 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:14.666 10:35:49 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:14.666 10:35:49 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:14.666 10:35:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:14.666 10:35:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:14.666 10:35:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:14.666 10:35:49 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1339 -- # local sanitizers 00:15:14.666 10:35:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:14.666 10:35:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:15:14.666 10:35:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:14.666 10:35:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:14.666 10:35:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:14.666 10:35:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:15:14.666 10:35:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:14.666 10:35:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:14.666 10:35:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:14.666 10:35:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:15:14.666 10:35:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:14.666 10:35:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:14.928 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:14.928 fio-3.35 00:15:14.928 Starting 1 thread 00:15:29.817 00:15:29.817 test: (groupid=0, jobs=1): err= 0: pid=85706: Sat Sep 28 10:36:02 2024 00:15:29.817 read: IOPS=8505, BW=33.2MiB/s (34.8MB/s)(255MiB/7666msec) 00:15:29.817 slat (nsec): min=3020, max=16646, avg=3470.18, stdev=678.43 00:15:29.817 clat (usec): min=487, max=30337, avg=15040.95, stdev=1490.75 00:15:29.817 lat (usec): min=491, max=30340, avg=15044.42, stdev=1490.78 00:15:29.817 clat percentiles (usec): 00:15:29.817 | 1.00th=[13173], 5.00th=[13304], 10.00th=[13566], 20.00th=[14615], 00:15:29.817 | 30.00th=[14746], 40.00th=[14877], 50.00th=[14877], 60.00th=[15008], 00:15:29.817 | 70.00th=[15270], 80.00th=[15401], 90.00th=[15664], 95.00th=[16909], 00:15:29.817 | 99.00th=[22938], 99.50th=[24249], 99.90th=[24773], 99.95th=[26608], 00:15:29.817 | 99.99th=[29754] 00:15:29.817 write: IOPS=15.5k, BW=60.6MiB/s (63.5MB/s)(256MiB/4227msec); 0 zone resets 00:15:29.817 slat (usec): min=4, max=113, avg= 5.75, stdev= 2.19 00:15:29.817 clat (usec): min=486, max=44353, avg=8212.47, stdev=9666.02 00:15:29.817 lat (usec): min=490, max=44357, avg=8218.21, stdev=9665.98 00:15:29.817 clat percentiles (usec): 00:15:29.817 | 1.00th=[ 603], 5.00th=[ 660], 10.00th=[ 701], 20.00th=[ 807], 00:15:29.817 | 30.00th=[ 996], 40.00th=[ 1401], 50.00th=[ 5211], 60.00th=[ 6063], 00:15:29.817 | 70.00th=[ 7439], 80.00th=[14484], 90.00th=[27132], 95.00th=[28705], 00:15:29.817 | 99.00th=[34341], 99.50th=[36963], 99.90th=[39584], 99.95th=[40109], 00:15:29.817 | 99.99th=[43254] 00:15:29.817 bw ( KiB/s): min=26047, max=84928, per=93.89%, avg=58231.00, stdev=17932.33, samples=9 00:15:29.817 iops : min= 6511, max=21232, avg=14557.67, stdev=4483.25, samples=9 00:15:29.817 lat (usec) : 500=0.01%, 750=7.93%, 1000=7.33% 00:15:29.817 lat (msec) : 2=5.37%, 4=0.57%, 10=16.12%, 20=53.60%, 50=9.08% 00:15:29.817 cpu : usr=99.17%, sys=0.17%, ctx=17, majf=0, minf=5577 00:15:29.817 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 
16=0.1%, 32=0.1%, >=64=99.8% 00:15:29.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.817 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:29.817 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:29.817 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:29.817 00:15:29.817 Run status group 0 (all jobs): 00:15:29.817 READ: bw=33.2MiB/s (34.8MB/s), 33.2MiB/s-33.2MiB/s (34.8MB/s-34.8MB/s), io=255MiB (267MB), run=7666-7666msec 00:15:29.817 WRITE: bw=60.6MiB/s (63.5MB/s), 60.6MiB/s-60.6MiB/s (63.5MB/s-63.5MB/s), io=256MiB (268MB), run=4227-4227msec 00:15:29.817 ----------------------------------------------------- 00:15:29.817 Suppressions used: 00:15:29.817 count bytes template 00:15:29.817 1 5 /usr/src/fio/parse.c 00:15:29.817 2 192 /usr/src/fio/iolog.c 00:15:29.817 1 8 libtcmalloc_minimal.so 00:15:29.817 1 904 libcrypto.so 00:15:29.817 ----------------------------------------------------- 00:15:29.817 00:15:29.817 10:36:02 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:15:29.817 10:36:02 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:29.817 10:36:02 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:29.817 10:36:03 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:29.817 Remove shared memory files 00:15:29.817 10:36:03 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:15:29.817 10:36:03 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:15:29.817 10:36:03 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:15:29.817 10:36:03 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:15:29.817 10:36:03 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid70757 /dev/shm/spdk_tgt_trace.pid84139 00:15:29.817 10:36:03 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:15:29.817 10:36:03 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:15:29.817 ************************************ 00:15:29.817 END TEST ftl_fio_basic 00:15:29.817 ************************************ 00:15:29.817 00:15:29.817 real 0m52.839s 00:15:29.817 user 1m58.927s 00:15:29.817 sys 0m2.336s 00:15:29.817 10:36:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:29.817 10:36:03 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:29.817 10:36:03 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:15:29.817 10:36:03 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:29.817 10:36:03 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:29.817 10:36:03 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:29.817 ************************************ 00:15:29.817 START TEST ftl_bdevperf 00:15:29.817 ************************************ 00:15:29.817 10:36:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:15:29.817 * Looking for test storage... 
00:15:29.817 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:29.817 10:36:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:29.817 10:36:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version 00:15:29.817 10:36:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:29.817 10:36:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:29.817 10:36:03 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:29.817 10:36:03 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:29.817 10:36:03 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:29.817 10:36:03 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:15:29.817 10:36:03 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:15:29.817 10:36:03 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:15:29.817 10:36:03 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:15:29.817 10:36:03 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:15:29.817 10:36:03 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:15:29.817 10:36:03 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:15:29.817 10:36:03 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:29.817 10:36:03 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:15:29.817 10:36:03 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:15:29.817 10:36:03 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:29.817 10:36:03 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:29.817 10:36:03 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:15:29.817 10:36:03 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:15:29.817 10:36:03 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:29.817 10:36:03 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:15:29.817 10:36:03 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:15:29.817 10:36:03 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:15:29.817 10:36:03 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:15:29.817 10:36:03 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:29.817 10:36:03 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:15:29.817 10:36:03 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:15:29.817 10:36:03 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:29.817 10:36:03 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:29.817 10:36:03 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:15:29.817 10:36:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:29.817 10:36:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:29.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.817 --rc genhtml_branch_coverage=1 00:15:29.817 --rc genhtml_function_coverage=1 00:15:29.817 --rc genhtml_legend=1 00:15:29.817 --rc geninfo_all_blocks=1 00:15:29.817 --rc geninfo_unexecuted_blocks=1 00:15:29.817 00:15:29.817 ' 00:15:29.817 10:36:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:29.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.818 --rc genhtml_branch_coverage=1 00:15:29.818 
--rc genhtml_function_coverage=1 00:15:29.818 --rc genhtml_legend=1 00:15:29.818 --rc geninfo_all_blocks=1 00:15:29.818 --rc geninfo_unexecuted_blocks=1 00:15:29.818 00:15:29.818 ' 00:15:29.818 10:36:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:29.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.818 --rc genhtml_branch_coverage=1 00:15:29.818 --rc genhtml_function_coverage=1 00:15:29.818 --rc genhtml_legend=1 00:15:29.818 --rc geninfo_all_blocks=1 00:15:29.818 --rc geninfo_unexecuted_blocks=1 00:15:29.818 00:15:29.818 ' 00:15:29.818 10:36:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:29.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.818 --rc genhtml_branch_coverage=1 00:15:29.818 --rc genhtml_function_coverage=1 00:15:29.818 --rc genhtml_legend=1 00:15:29.818 --rc geninfo_all_blocks=1 00:15:29.818 --rc geninfo_unexecuted_blocks=1 00:15:29.818 00:15:29.818 ' 00:15:29.818 10:36:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:29.818 10:36:03 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:15:29.818 10:36:03 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:29.818 10:36:03 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:29.818 10:36:03 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:29.818 10:36:03 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:29.818 10:36:03 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:29.818 10:36:03 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:29.818 10:36:03 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:29.818 10:36:03 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:29.818 10:36:03 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:29.818 10:36:03 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:29.818 10:36:03 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:29.818 10:36:03 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:29.818 10:36:03 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:29.818 10:36:03 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:29.818 10:36:03 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:29.818 10:36:03 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:29.818 10:36:03 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:29.818 10:36:03 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:29.818 10:36:03 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:29.818 10:36:03 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:29.818 10:36:03 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:29.818 10:36:03 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:29.818 10:36:03 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:29.818 10:36:03 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:29.818 10:36:03 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:29.818 10:36:03 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:29.818 10:36:03 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:29.818 10:36:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:15:29.818 10:36:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:15:29.818 10:36:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:15:29.818 10:36:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:29.818 10:36:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:15:29.818 10:36:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=85922 00:15:29.818 10:36:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:15:29.818 10:36:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:15:29.818 10:36:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 85922 00:15:29.818 10:36:03 ftl.ftl_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 85922 ']' 00:15:29.818 10:36:03 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:29.818 10:36:03 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:29.818 10:36:03 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:29.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:29.818 10:36:03 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:29.818 10:36:03 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:29.818 [2024-09-28 10:36:03.260031] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:15:29.818 [2024-09-28 10:36:03.260288] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85922 ] 00:15:29.818 [2024-09-28 10:36:03.382209] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:15:29.818 [2024-09-28 10:36:03.402817] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.818 [2024-09-28 10:36:03.436279] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.818 10:36:04 ftl.ftl_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:29.818 10:36:04 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:15:29.818 10:36:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:15:29.818 10:36:04 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:15:29.818 10:36:04 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:15:29.818 10:36:04 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:15:29.818 10:36:04 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:15:29.818 10:36:04 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:15:29.818 10:36:04 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:29.818 10:36:04 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:15:29.818 10:36:04 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:29.818 10:36:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:15:29.818 10:36:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:29.818 10:36:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:15:29.818 10:36:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:15:29.818 10:36:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:29.818 10:36:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:29.818 { 00:15:29.818 "name": "nvme0n1", 00:15:29.818 "aliases": [ 00:15:29.818 "bef78d55-71b0-4e86-a447-7e4ae9bffe1e" 00:15:29.818 ], 00:15:29.818 "product_name": "NVMe disk", 00:15:29.818 "block_size": 4096, 00:15:29.818 "num_blocks": 1310720, 00:15:29.818 "uuid": "bef78d55-71b0-4e86-a447-7e4ae9bffe1e", 00:15:29.818 "numa_id": -1, 00:15:29.818 "assigned_rate_limits": { 00:15:29.818 "rw_ios_per_sec": 0, 00:15:29.818 "rw_mbytes_per_sec": 0, 00:15:29.818 "r_mbytes_per_sec": 0, 00:15:29.818 "w_mbytes_per_sec": 0 00:15:29.818 }, 00:15:29.818 "claimed": true, 00:15:29.818 "claim_type": "read_many_write_one", 00:15:29.818 "zoned": false, 00:15:29.818 "supported_io_types": { 00:15:29.818 "read": true, 00:15:29.818 "write": true, 00:15:29.818 "unmap": true, 00:15:29.818 "flush": true, 00:15:29.818 "reset": true, 00:15:29.818 "nvme_admin": true, 00:15:29.818 "nvme_io": true, 00:15:29.818 "nvme_io_md": false, 00:15:29.818 "write_zeroes": true, 00:15:29.818 "zcopy": false, 00:15:29.818 "get_zone_info": false, 00:15:29.818 "zone_management": false, 00:15:29.818 "zone_append": false, 00:15:29.818 "compare": true, 00:15:29.818 "compare_and_write": false, 00:15:29.818 "abort": true, 00:15:29.818 "seek_hole": false, 00:15:29.818 "seek_data": false, 00:15:29.818 "copy": true, 00:15:29.818 "nvme_iov_md": false 00:15:29.818 }, 00:15:29.819 "driver_specific": { 00:15:29.819 "nvme": [ 00:15:29.819 { 00:15:29.819 "pci_address": "0000:00:11.0", 00:15:29.819 "trid": { 00:15:29.819 "trtype": "PCIe", 00:15:29.819 "traddr": "0000:00:11.0" 00:15:29.819 }, 00:15:29.819 "ctrlr_data": { 00:15:29.819 "cntlid": 0, 00:15:29.819 "vendor_id": "0x1b36", 00:15:29.819 "model_number": "QEMU NVMe Ctrl", 
00:15:29.819 "serial_number": "12341", 00:15:29.819 "firmware_revision": "8.0.0", 00:15:29.819 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:29.819 "oacs": { 00:15:29.819 "security": 0, 00:15:29.819 "format": 1, 00:15:29.819 "firmware": 0, 00:15:29.819 "ns_manage": 1 00:15:29.819 }, 00:15:29.819 "multi_ctrlr": false, 00:15:29.819 "ana_reporting": false 00:15:29.819 }, 00:15:29.819 "vs": { 00:15:29.819 "nvme_version": "1.4" 00:15:29.819 }, 00:15:29.819 "ns_data": { 00:15:29.819 "id": 1, 00:15:29.819 "can_share": false 00:15:29.819 } 00:15:29.819 } 00:15:29.819 ], 00:15:29.819 "mp_policy": "active_passive" 00:15:29.819 } 00:15:29.819 } 00:15:29.819 ]' 00:15:29.819 10:36:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:30.080 10:36:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:15:30.080 10:36:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:30.080 10:36:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:15:30.080 10:36:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:15:30.080 10:36:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:15:30.080 10:36:04 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:15:30.080 10:36:04 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:30.080 10:36:04 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:15:30.080 10:36:04 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:30.080 10:36:04 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:30.339 10:36:04 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=37f38d5a-d1cf-4c6f-9908-79968f04c500 00:15:30.339 10:36:04 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:15:30.339 10:36:04 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 37f38d5a-d1cf-4c6f-9908-79968f04c500 00:15:30.339 10:36:05 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:30.598 10:36:05 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=f3adcfaa-b62d-4569-b67f-ba4cd01b8b02 00:15:30.598 10:36:05 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f3adcfaa-b62d-4569-b67f-ba4cd01b8b02 00:15:30.857 10:36:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=0a3aa819-69c8-4484-a3b9-e3a7837b590d 00:15:30.857 10:36:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 0a3aa819-69c8-4484-a3b9-e3a7837b590d 00:15:30.857 10:36:05 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:15:30.857 10:36:05 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:15:30.857 10:36:05 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=0a3aa819-69c8-4484-a3b9-e3a7837b590d 00:15:30.857 10:36:05 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:15:30.857 10:36:05 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 0a3aa819-69c8-4484-a3b9-e3a7837b590d 00:15:30.857 10:36:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=0a3aa819-69c8-4484-a3b9-e3a7837b590d 00:15:30.857 10:36:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:30.857 10:36:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:15:30.857 10:36:05 
ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:15:30.857 10:36:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0a3aa819-69c8-4484-a3b9-e3a7837b590d 00:15:31.116 10:36:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:31.116 { 00:15:31.116 "name": "0a3aa819-69c8-4484-a3b9-e3a7837b590d", 00:15:31.116 "aliases": [ 00:15:31.116 "lvs/nvme0n1p0" 00:15:31.116 ], 00:15:31.116 "product_name": "Logical Volume", 00:15:31.116 "block_size": 4096, 00:15:31.116 "num_blocks": 26476544, 00:15:31.116 "uuid": "0a3aa819-69c8-4484-a3b9-e3a7837b590d", 00:15:31.116 "assigned_rate_limits": { 00:15:31.116 "rw_ios_per_sec": 0, 00:15:31.116 "rw_mbytes_per_sec": 0, 00:15:31.116 "r_mbytes_per_sec": 0, 00:15:31.116 "w_mbytes_per_sec": 0 00:15:31.116 }, 00:15:31.116 "claimed": false, 00:15:31.116 "zoned": false, 00:15:31.116 "supported_io_types": { 00:15:31.116 "read": true, 00:15:31.116 "write": true, 00:15:31.116 "unmap": true, 00:15:31.116 "flush": false, 00:15:31.116 "reset": true, 00:15:31.116 "nvme_admin": false, 00:15:31.116 "nvme_io": false, 00:15:31.116 "nvme_io_md": false, 00:15:31.116 "write_zeroes": true, 00:15:31.116 "zcopy": false, 00:15:31.116 "get_zone_info": false, 00:15:31.116 "zone_management": false, 00:15:31.116 "zone_append": false, 00:15:31.116 "compare": false, 00:15:31.116 "compare_and_write": false, 00:15:31.116 "abort": false, 00:15:31.116 "seek_hole": true, 00:15:31.116 "seek_data": true, 00:15:31.116 "copy": false, 00:15:31.116 "nvme_iov_md": false 00:15:31.116 }, 00:15:31.116 "driver_specific": { 00:15:31.116 "lvol": { 00:15:31.116 "lvol_store_uuid": "f3adcfaa-b62d-4569-b67f-ba4cd01b8b02", 00:15:31.116 "base_bdev": "nvme0n1", 00:15:31.116 "thin_provision": true, 00:15:31.116 "num_allocated_clusters": 0, 00:15:31.116 "snapshot": false, 00:15:31.116 "clone": false, 00:15:31.116 "esnap_clone": false 00:15:31.116 } 00:15:31.116 } 00:15:31.116 } 00:15:31.116 ]' 00:15:31.116 10:36:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:31.116 10:36:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:15:31.116 10:36:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:31.116 10:36:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:31.116 10:36:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:31.116 10:36:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:15:31.116 10:36:05 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:15:31.116 10:36:05 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:15:31.116 10:36:05 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:15:31.374 10:36:05 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:31.374 10:36:05 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:31.374 10:36:05 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 0a3aa819-69c8-4484-a3b9-e3a7837b590d 00:15:31.374 10:36:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=0a3aa819-69c8-4484-a3b9-e3a7837b590d 00:15:31.374 10:36:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:31.374 10:36:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:15:31.374 10:36:05 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1381 -- # local nb 00:15:31.374 10:36:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0a3aa819-69c8-4484-a3b9-e3a7837b590d 00:15:31.633 10:36:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:31.633 { 00:15:31.633 "name": "0a3aa819-69c8-4484-a3b9-e3a7837b590d", 00:15:31.633 "aliases": [ 00:15:31.633 "lvs/nvme0n1p0" 00:15:31.633 ], 00:15:31.633 "product_name": "Logical Volume", 00:15:31.633 "block_size": 4096, 00:15:31.633 "num_blocks": 26476544, 00:15:31.633 "uuid": "0a3aa819-69c8-4484-a3b9-e3a7837b590d", 00:15:31.633 "assigned_rate_limits": { 00:15:31.633 "rw_ios_per_sec": 0, 00:15:31.633 "rw_mbytes_per_sec": 0, 00:15:31.633 "r_mbytes_per_sec": 0, 00:15:31.633 "w_mbytes_per_sec": 0 00:15:31.633 }, 00:15:31.633 "claimed": false, 00:15:31.633 "zoned": false, 00:15:31.633 "supported_io_types": { 00:15:31.633 "read": true, 00:15:31.633 "write": true, 00:15:31.633 "unmap": true, 00:15:31.633 "flush": false, 00:15:31.633 "reset": true, 00:15:31.633 "nvme_admin": false, 00:15:31.633 "nvme_io": false, 00:15:31.633 "nvme_io_md": false, 00:15:31.633 "write_zeroes": true, 00:15:31.633 "zcopy": false, 00:15:31.633 "get_zone_info": false, 00:15:31.633 "zone_management": false, 00:15:31.633 "zone_append": false, 00:15:31.633 "compare": false, 00:15:31.633 "compare_and_write": false, 00:15:31.633 "abort": false, 00:15:31.633 "seek_hole": true, 00:15:31.633 "seek_data": true, 00:15:31.633 "copy": false, 00:15:31.633 "nvme_iov_md": false 00:15:31.633 }, 00:15:31.633 "driver_specific": { 00:15:31.633 "lvol": { 00:15:31.633 "lvol_store_uuid": "f3adcfaa-b62d-4569-b67f-ba4cd01b8b02", 00:15:31.633 "base_bdev": "nvme0n1", 00:15:31.633 "thin_provision": true, 00:15:31.633 "num_allocated_clusters": 0, 00:15:31.633 "snapshot": false, 00:15:31.633 "clone": false, 00:15:31.633 "esnap_clone": false 00:15:31.633 } 00:15:31.633 } 00:15:31.633 } 00:15:31.633 ]' 00:15:31.633 10:36:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:31.633 10:36:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:15:31.633 10:36:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:31.633 10:36:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:31.633 10:36:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:31.633 10:36:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:15:31.633 10:36:06 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:15:31.633 10:36:06 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:31.891 10:36:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:15:31.891 10:36:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 0a3aa819-69c8-4484-a3b9-e3a7837b590d 00:15:31.891 10:36:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=0a3aa819-69c8-4484-a3b9-e3a7837b590d 00:15:31.891 10:36:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:31.891 10:36:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:15:31.891 10:36:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:15:31.891 10:36:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0a3aa819-69c8-4484-a3b9-e3a7837b590d 
00:15:31.891 10:36:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:31.891 { 00:15:31.891 "name": "0a3aa819-69c8-4484-a3b9-e3a7837b590d", 00:15:31.891 "aliases": [ 00:15:31.891 "lvs/nvme0n1p0" 00:15:31.891 ], 00:15:31.891 "product_name": "Logical Volume", 00:15:31.891 "block_size": 4096, 00:15:31.891 "num_blocks": 26476544, 00:15:31.891 "uuid": "0a3aa819-69c8-4484-a3b9-e3a7837b590d", 00:15:31.891 "assigned_rate_limits": { 00:15:31.891 "rw_ios_per_sec": 0, 00:15:31.891 "rw_mbytes_per_sec": 0, 00:15:31.891 "r_mbytes_per_sec": 0, 00:15:31.891 "w_mbytes_per_sec": 0 00:15:31.891 }, 00:15:31.891 "claimed": false, 00:15:31.891 "zoned": false, 00:15:31.891 "supported_io_types": { 00:15:31.891 "read": true, 00:15:31.891 "write": true, 00:15:31.891 "unmap": true, 00:15:31.891 "flush": false, 00:15:31.891 "reset": true, 00:15:31.891 "nvme_admin": false, 00:15:31.891 "nvme_io": false, 00:15:31.891 "nvme_io_md": false, 00:15:31.891 "write_zeroes": true, 00:15:31.891 "zcopy": false, 00:15:31.891 "get_zone_info": false, 00:15:31.891 "zone_management": false, 00:15:31.891 "zone_append": false, 00:15:31.891 "compare": false, 00:15:31.891 "compare_and_write": false, 00:15:31.891 "abort": false, 00:15:31.891 "seek_hole": true, 00:15:31.891 "seek_data": true, 00:15:31.891 "copy": false, 00:15:31.891 "nvme_iov_md": false 00:15:31.891 }, 00:15:31.891 "driver_specific": { 00:15:31.891 "lvol": { 00:15:31.891 "lvol_store_uuid": "f3adcfaa-b62d-4569-b67f-ba4cd01b8b02", 00:15:31.891 "base_bdev": "nvme0n1", 00:15:31.891 "thin_provision": true, 00:15:31.891 "num_allocated_clusters": 0, 00:15:31.891 "snapshot": false, 00:15:31.891 "clone": false, 00:15:31.891 "esnap_clone": false 00:15:31.891 } 00:15:31.891 } 00:15:31.891 } 00:15:31.891 ]' 00:15:31.891 10:36:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:31.891 10:36:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:15:31.891 10:36:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:31.891 10:36:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:31.891 10:36:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:31.891 10:36:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:15:31.891 10:36:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:15:31.891 10:36:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 0a3aa819-69c8-4484-a3b9-e3a7837b590d -c nvc0n1p0 --l2p_dram_limit 20 00:15:32.150 [2024-09-28 10:36:06.819553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.150 [2024-09-28 10:36:06.819593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:32.150 [2024-09-28 10:36:06.819605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:32.150 [2024-09-28 10:36:06.819615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.150 [2024-09-28 10:36:06.819658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.150 [2024-09-28 10:36:06.819668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:32.150 [2024-09-28 10:36:06.819674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:15:32.150 [2024-09-28 10:36:06.819681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.150 [2024-09-28 
10:36:06.819694] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:32.150 [2024-09-28 10:36:06.819901] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:32.150 [2024-09-28 10:36:06.819912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.150 [2024-09-28 10:36:06.819921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:32.150 [2024-09-28 10:36:06.819927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.221 ms 00:15:32.150 [2024-09-28 10:36:06.819937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.150 [2024-09-28 10:36:06.819957] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID e7cb5145-2f39-41b7-8e90-a18e228efde8 00:15:32.150 [2024-09-28 10:36:06.820916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.150 [2024-09-28 10:36:06.820938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:32.150 [2024-09-28 10:36:06.820948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:15:32.150 [2024-09-28 10:36:06.820956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.150 [2024-09-28 10:36:06.825744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.150 [2024-09-28 10:36:06.825771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:32.150 [2024-09-28 10:36:06.825782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.724 ms 00:15:32.150 [2024-09-28 10:36:06.825788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.150 [2024-09-28 10:36:06.825854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.150 [2024-09-28 10:36:06.825861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:32.150 [2024-09-28 10:36:06.825869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:15:32.150 [2024-09-28 10:36:06.825877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.150 [2024-09-28 10:36:06.825917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.150 [2024-09-28 10:36:06.825926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:32.150 [2024-09-28 10:36:06.825933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:15:32.150 [2024-09-28 10:36:06.825939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.150 [2024-09-28 10:36:06.825957] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:32.150 [2024-09-28 10:36:06.827228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.150 [2024-09-28 10:36:06.827254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:32.150 [2024-09-28 10:36:06.827262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.276 ms 00:15:32.150 [2024-09-28 10:36:06.827271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.150 [2024-09-28 10:36:06.827294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.151 [2024-09-28 10:36:06.827306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:32.151 [2024-09-28 
10:36:06.827312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:15:32.151 [2024-09-28 10:36:06.827319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.151 [2024-09-28 10:36:06.827330] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:32.151 [2024-09-28 10:36:06.827442] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:15:32.151 [2024-09-28 10:36:06.827451] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:32.151 [2024-09-28 10:36:06.827460] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:15:32.151 [2024-09-28 10:36:06.827468] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:32.151 [2024-09-28 10:36:06.827478] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:32.151 [2024-09-28 10:36:06.827484] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:32.151 [2024-09-28 10:36:06.827505] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:32.151 [2024-09-28 10:36:06.827514] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:15:32.151 [2024-09-28 10:36:06.827520] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:15:32.151 [2024-09-28 10:36:06.827527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.151 [2024-09-28 10:36:06.827535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:32.151 [2024-09-28 10:36:06.827541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:15:32.151 [2024-09-28 10:36:06.827549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.151 [2024-09-28 10:36:06.827612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.151 [2024-09-28 10:36:06.827622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:32.151 [2024-09-28 10:36:06.827628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:15:32.151 [2024-09-28 10:36:06.827634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.151 [2024-09-28 10:36:06.827703] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:32.151 [2024-09-28 10:36:06.827714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:32.151 [2024-09-28 10:36:06.827720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:32.151 [2024-09-28 10:36:06.827727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:32.151 [2024-09-28 10:36:06.827738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:32.151 [2024-09-28 10:36:06.827744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:32.151 [2024-09-28 10:36:06.827753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:32.151 [2024-09-28 10:36:06.827760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:32.151 [2024-09-28 10:36:06.827765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:32.151 [2024-09-28 10:36:06.827771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.50 MiB 00:15:32.151 [2024-09-28 10:36:06.827776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:32.151 [2024-09-28 10:36:06.827784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:32.151 [2024-09-28 10:36:06.827789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:32.151 [2024-09-28 10:36:06.827796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:32.151 [2024-09-28 10:36:06.827806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:15:32.151 [2024-09-28 10:36:06.827812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:32.151 [2024-09-28 10:36:06.827817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:32.151 [2024-09-28 10:36:06.827823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:15:32.151 [2024-09-28 10:36:06.827828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:32.151 [2024-09-28 10:36:06.827835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:32.151 [2024-09-28 10:36:06.827840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:32.151 [2024-09-28 10:36:06.827846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:32.151 [2024-09-28 10:36:06.827851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:32.151 [2024-09-28 10:36:06.827857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:32.151 [2024-09-28 10:36:06.827862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:32.151 [2024-09-28 10:36:06.827868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:32.151 [2024-09-28 10:36:06.827873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:32.151 [2024-09-28 10:36:06.827882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:32.151 [2024-09-28 10:36:06.827888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:32.151 [2024-09-28 10:36:06.827895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:15:32.151 [2024-09-28 10:36:06.827901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:32.151 [2024-09-28 10:36:06.827908] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:32.151 [2024-09-28 10:36:06.827913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:15:32.151 [2024-09-28 10:36:06.827920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:32.151 [2024-09-28 10:36:06.827926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:32.151 [2024-09-28 10:36:06.827933] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:15:32.151 [2024-09-28 10:36:06.827938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:32.151 [2024-09-28 10:36:06.827945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:15:32.151 [2024-09-28 10:36:06.827950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:15:32.151 [2024-09-28 10:36:06.827957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:32.151 [2024-09-28 10:36:06.827973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:15:32.151 [2024-09-28 10:36:06.827980] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:15:32.151 [2024-09-28 10:36:06.827985] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:32.151 [2024-09-28 10:36:06.827994] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:32.151 [2024-09-28 10:36:06.828000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:32.151 [2024-09-28 10:36:06.828007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:32.151 [2024-09-28 10:36:06.828015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:32.151 [2024-09-28 10:36:06.828025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:32.151 [2024-09-28 10:36:06.828032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:32.151 [2024-09-28 10:36:06.828039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:32.151 [2024-09-28 10:36:06.828045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:32.151 [2024-09-28 10:36:06.828054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:32.151 [2024-09-28 10:36:06.828060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:32.151 [2024-09-28 10:36:06.828070] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:32.151 [2024-09-28 10:36:06.828077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:32.151 [2024-09-28 10:36:06.828086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:32.151 [2024-09-28 10:36:06.828092] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:15:32.151 [2024-09-28 10:36:06.828099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:15:32.151 [2024-09-28 10:36:06.828105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:15:32.151 [2024-09-28 10:36:06.828113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:15:32.151 [2024-09-28 10:36:06.828118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:15:32.151 [2024-09-28 10:36:06.828125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:15:32.151 [2024-09-28 10:36:06.828130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:15:32.151 [2024-09-28 10:36:06.828136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:15:32.151 [2024-09-28 10:36:06.828141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:15:32.152 [2024-09-28 10:36:06.828148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:15:32.152 [2024-09-28 
10:36:06.828153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:15:32.152 [2024-09-28 10:36:06.828159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:15:32.152 [2024-09-28 10:36:06.828165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:15:32.152 [2024-09-28 10:36:06.828171] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:32.152 [2024-09-28 10:36:06.828180] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:32.152 [2024-09-28 10:36:06.828188] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:32.152 [2024-09-28 10:36:06.828193] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:32.152 [2024-09-28 10:36:06.828201] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:32.152 [2024-09-28 10:36:06.828206] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:32.152 [2024-09-28 10:36:06.828214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.152 [2024-09-28 10:36:06.828220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:32.152 [2024-09-28 10:36:06.828226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.559 ms 00:15:32.152 [2024-09-28 10:36:06.828232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.152 [2024-09-28 10:36:06.828256] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
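The capacities in the layout dump above line up with the values the get_bdev_size traces pulled out of bdev_get_bdevs earlier. A minimal sketch of that arithmetic, using only numbers printed in this log (variable names are illustrative, not the helpers' actual bodies):

    # Base bdev capacity: block_size * num_blocks, expressed in MiB.
    bs=4096; nb=26476544                                   # from `rpc.py bdev_get_bdevs` + jq in the traces above
    echo $(( bs * nb / 1024 / 1024 ))                      # 103424 -> "Base device capacity: 103424.00 MiB"

    # L2P table footprint: entries * address size.
    l2p_entries=20971520; l2p_addr_size=4                  # from the layout dump above
    echo $(( l2p_entries * l2p_addr_size / 1024 / 1024 ))  # 80 -> "Region l2p ... blocks: 80.00 MiB"

Since bdev_ftl_create was invoked with --l2p_dram_limit 20, only a capped slice of that 80 MiB table stays resident, which is what the later "l2p maximum resident size is: 19 (of 20) MiB" notice reports.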
00:15:32.152 [2024-09-28 10:36:06.828263] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:15:34.686 [2024-09-28 10:36:08.920032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.686 [2024-09-28 10:36:08.920250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:34.686 [2024-09-28 10:36:08.920275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2091.763 ms 00:15:34.686 [2024-09-28 10:36:08.920287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.686 [2024-09-28 10:36:08.937740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.686 [2024-09-28 10:36:08.937785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:34.686 [2024-09-28 10:36:08.937802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.324 ms 00:15:34.686 [2024-09-28 10:36:08.937811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.686 [2024-09-28 10:36:08.937908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.686 [2024-09-28 10:36:08.937919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:34.686 [2024-09-28 10:36:08.937929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:15:34.686 [2024-09-28 10:36:08.937939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.686 [2024-09-28 10:36:08.945857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.686 [2024-09-28 10:36:08.945890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:34.686 [2024-09-28 10:36:08.945901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.849 ms 00:15:34.686 [2024-09-28 10:36:08.945909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.686 [2024-09-28 10:36:08.946181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.686 [2024-09-28 10:36:08.946215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:34.686 [2024-09-28 10:36:08.946240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:15:34.686 [2024-09-28 10:36:08.946259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.686 [2024-09-28 10:36:08.946628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.686 [2024-09-28 10:36:08.946724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:34.686 [2024-09-28 10:36:08.946743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:15:34.686 [2024-09-28 10:36:08.946750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.686 [2024-09-28 10:36:08.946854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.686 [2024-09-28 10:36:08.946863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:34.686 [2024-09-28 10:36:08.946872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:15:34.686 [2024-09-28 10:36:08.946881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.686 [2024-09-28 10:36:08.951332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.686 [2024-09-28 10:36:08.951361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:34.686 [2024-09-28 
10:36:08.951376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.433 ms 00:15:34.686 [2024-09-28 10:36:08.951384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.686 [2024-09-28 10:36:08.959579] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:15:34.686 [2024-09-28 10:36:08.964487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.686 [2024-09-28 10:36:08.964518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:34.686 [2024-09-28 10:36:08.964529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.043 ms 00:15:34.686 [2024-09-28 10:36:08.964538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.686 [2024-09-28 10:36:09.017051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.686 [2024-09-28 10:36:09.017221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:34.686 [2024-09-28 10:36:09.017238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.493 ms 00:15:34.686 [2024-09-28 10:36:09.017248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.686 [2024-09-28 10:36:09.017424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.686 [2024-09-28 10:36:09.017437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:34.686 [2024-09-28 10:36:09.017445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:15:34.686 [2024-09-28 10:36:09.017454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.686 [2024-09-28 10:36:09.021168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.686 [2024-09-28 10:36:09.021204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:34.686 [2024-09-28 10:36:09.021214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.681 ms 00:15:34.686 [2024-09-28 10:36:09.021223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.686 [2024-09-28 10:36:09.024514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.686 [2024-09-28 10:36:09.024629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:34.686 [2024-09-28 10:36:09.024644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.259 ms 00:15:34.686 [2024-09-28 10:36:09.024653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.686 [2024-09-28 10:36:09.024943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.686 [2024-09-28 10:36:09.024982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:34.686 [2024-09-28 10:36:09.024992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:15:34.686 [2024-09-28 10:36:09.025001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.686 [2024-09-28 10:36:09.053246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.686 [2024-09-28 10:36:09.053284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:34.686 [2024-09-28 10:36:09.053295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.216 ms 00:15:34.686 [2024-09-28 10:36:09.053303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.686 [2024-09-28 10:36:09.057990] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.686 [2024-09-28 10:36:09.058025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:34.686 [2024-09-28 10:36:09.058035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.648 ms 00:15:34.686 [2024-09-28 10:36:09.058044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.686 [2024-09-28 10:36:09.061799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.686 [2024-09-28 10:36:09.061833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:15:34.686 [2024-09-28 10:36:09.061842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.725 ms 00:15:34.686 [2024-09-28 10:36:09.061850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.686 [2024-09-28 10:36:09.066186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.686 [2024-09-28 10:36:09.066224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:34.686 [2024-09-28 10:36:09.066233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.306 ms 00:15:34.686 [2024-09-28 10:36:09.066242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.686 [2024-09-28 10:36:09.066277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.686 [2024-09-28 10:36:09.066291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:34.686 [2024-09-28 10:36:09.066299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:15:34.686 [2024-09-28 10:36:09.066312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.686 [2024-09-28 10:36:09.066374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.686 [2024-09-28 10:36:09.066384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:34.687 [2024-09-28 10:36:09.066392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:15:34.687 [2024-09-28 10:36:09.066400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.687 [2024-09-28 10:36:09.067219] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2247.258 ms, result 0 00:15:34.687 { 00:15:34.687 "name": "ftl0", 00:15:34.687 "uuid": "e7cb5145-2f39-41b7-8e90-a18e228efde8" 00:15:34.687 } 00:15:34.687 10:36:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:15:34.687 10:36:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:15:34.687 10:36:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:15:34.687 10:36:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:15:34.687 [2024-09-28 10:36:09.380269] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:15:34.687 I/O size of 69632 is greater than zero copy threshold (65536). 00:15:34.687 Zero copy mechanism will not be used. 00:15:34.687 Running I/O for 4 seconds... 
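With ftl0 created, checked via bdev_ftl_get_stats, and the first randwrite run launched, the device stack exercised below can be condensed into the rpc.py calls already traced in this run (a sketch only; the real harness lives in ftl/common.sh and ftl/bdevperf.sh, and the bdev names and PCIe address are simply the ones this run used):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0     # -> nvc0n1, the cache controller namespace
    $rpc bdev_split_create nvc0n1 -s 5171 1                              # -> nvc0n1p0, used as the NV cache
    $rpc -t 240 bdev_ftl_create -b ftl0 \
         -d 0a3aa819-69c8-4484-a3b9-e3a7837b590d -c nvc0n1p0 --l2p_dram_limit 20
    $rpc bdev_ftl_get_stats -b ftl0 | jq -r .name | grep -qw ftl0        # sanity check that ftl0 registered
    # ... bdevperf.py perform_tests runs follow ...
    $rpc bdev_ftl_delete -b ftl0                                         # teardown, later in this log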
00:15:38.902 848.00 IOPS, 56.31 MiB/s 1143.50 IOPS, 75.94 MiB/s 1197.67 IOPS, 79.53 MiB/s 1234.25 IOPS, 81.96 MiB/s 00:15:38.902 Latency(us) 00:15:38.902 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:38.902 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:15:38.902 ftl0 : 4.00 1233.78 81.93 0.00 0.00 853.15 222.13 2545.82 00:15:38.902 =================================================================================================================== 00:15:38.902 Total : 1233.78 81.93 0.00 0.00 853.15 222.13 2545.82 00:15:38.902 { 00:15:38.902 "results": [ 00:15:38.902 { 00:15:38.902 "job": "ftl0", 00:15:38.902 "core_mask": "0x1", 00:15:38.902 "workload": "randwrite", 00:15:38.902 "status": "finished", 00:15:38.902 "queue_depth": 1, 00:15:38.902 "io_size": 69632, 00:15:38.902 "runtime": 4.002332, 00:15:38.902 "iops": 1233.7807058484902, 00:15:38.902 "mibps": 81.9307499977513, 00:15:38.902 "io_failed": 0, 00:15:38.902 "io_timeout": 0, 00:15:38.902 "avg_latency_us": 853.1454229367231, 00:15:38.902 "min_latency_us": 222.12923076923076, 00:15:38.902 "max_latency_us": 2545.8215384615382 00:15:38.902 } 00:15:38.902 ], 00:15:38.902 "core_count": 1 00:15:38.902 } 00:15:38.902 [2024-09-28 10:36:13.388710] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:15:38.902 10:36:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:15:38.902 [2024-09-28 10:36:13.489334] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:15:38.902 Running I/O for 4 seconds... 00:15:43.117 7063.00 IOPS, 27.59 MiB/s 6295.50 IOPS, 24.59 MiB/s 5965.00 IOPS, 23.30 MiB/s 5818.50 IOPS, 22.73 MiB/s 00:15:43.117 Latency(us) 00:15:43.117 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:43.117 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:15:43.117 ftl0 : 4.03 5805.45 22.68 0.00 0.00 21971.62 245.76 50412.31 00:15:43.117 =================================================================================================================== 00:15:43.117 Total : 5805.45 22.68 0.00 0.00 21971.62 0.00 50412.31 00:15:43.117 [2024-09-28 10:36:17.526216] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:15:43.117 { 00:15:43.117 "results": [ 00:15:43.117 { 00:15:43.117 "job": "ftl0", 00:15:43.117 "core_mask": "0x1", 00:15:43.117 "workload": "randwrite", 00:15:43.117 "status": "finished", 00:15:43.117 "queue_depth": 128, 00:15:43.117 "io_size": 4096, 00:15:43.117 "runtime": 4.030351, 00:15:43.117 "iops": 5805.449699046063, 00:15:43.117 "mibps": 22.677537886898683, 00:15:43.117 "io_failed": 0, 00:15:43.117 "io_timeout": 0, 00:15:43.117 "avg_latency_us": 21971.621257832685, 00:15:43.117 "min_latency_us": 245.76, 00:15:43.117 "max_latency_us": 50412.307692307695 00:15:43.117 } 00:15:43.117 ], 00:15:43.117 "core_count": 1 00:15:43.117 } 00:15:43.117 10:36:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:15:43.117 [2024-09-28 10:36:17.630656] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:15:43.117 Running I/O for 4 seconds... 
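The MiB/s column in the summaries above is just IOPS multiplied by the I/O size. A quick cross-check against the reported figures (values copied from the JSON results above; a sketch, not part of the harness):

    python3 -c 'print(1233.78 * 69632 / 2**20)'   # ~81.93 MiB/s for the q=1, 69632-byte randwrite run
    python3 -c 'print(5805.45 * 4096  / 2**20)'   # ~22.68 MiB/s for the q=128, 4096-byte randwrite run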
00:15:46.891 5100.00 IOPS, 19.92 MiB/s 5078.00 IOPS, 19.84 MiB/s 5069.00 IOPS, 19.80 MiB/s 5029.50 IOPS, 19.65 MiB/s 00:15:46.891 Latency(us) 00:15:46.891 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:46.891 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:46.891 Verification LBA range: start 0x0 length 0x1400000 00:15:46.891 ftl0 : 4.02 5035.47 19.67 0.00 0.00 25340.93 381.24 42144.69 00:15:46.891 =================================================================================================================== 00:15:46.891 Total : 5035.47 19.67 0.00 0.00 25340.93 0.00 42144.69 00:15:46.891 [2024-09-28 10:36:21.658793] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ft{ 00:15:46.891 "results": [ 00:15:46.891 { 00:15:46.891 "job": "ftl0", 00:15:46.891 "core_mask": "0x1", 00:15:46.891 "workload": "verify", 00:15:46.891 "status": "finished", 00:15:46.891 "verify_range": { 00:15:46.891 "start": 0, 00:15:46.891 "length": 20971520 00:15:46.891 }, 00:15:46.891 "queue_depth": 128, 00:15:46.891 "io_size": 4096, 00:15:46.891 "runtime": 4.020676, 00:15:46.891 "iops": 5035.471647056365, 00:15:46.892 "mibps": 19.669811121313927, 00:15:46.892 "io_failed": 0, 00:15:46.892 "io_timeout": 0, 00:15:46.892 "avg_latency_us": 25340.93404600339, 00:15:46.892 "min_latency_us": 381.2430769230769, 00:15:46.892 "max_latency_us": 42144.68923076923 00:15:46.892 } 00:15:46.892 ], 00:15:46.892 "core_count": 1 00:15:46.892 } 00:15:46.892 l0 00:15:47.151 10:36:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:15:47.151 [2024-09-28 10:36:21.864039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.151 [2024-09-28 10:36:21.864218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:47.151 [2024-09-28 10:36:21.864286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:47.151 [2024-09-28 10:36:21.864352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.151 [2024-09-28 10:36:21.864396] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:47.151 [2024-09-28 10:36:21.864886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.151 [2024-09-28 10:36:21.865007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:47.151 [2024-09-28 10:36:21.865069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.385 ms 00:15:47.151 [2024-09-28 10:36:21.865122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.151 [2024-09-28 10:36:21.866946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.151 [2024-09-28 10:36:21.867063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:47.151 [2024-09-28 10:36:21.867125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.780 ms 00:15:47.151 [2024-09-28 10:36:21.867148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.411 [2024-09-28 10:36:22.004185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.411 [2024-09-28 10:36:22.004318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:47.411 [2024-09-28 10:36:22.004339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 137.001 ms 00:15:47.411 [2024-09-28 
10:36:22.004350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.411 [2024-09-28 10:36:22.010470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.411 [2024-09-28 10:36:22.010499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:15:47.411 [2024-09-28 10:36:22.010511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.089 ms 00:15:47.411 [2024-09-28 10:36:22.010525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.411 [2024-09-28 10:36:22.011642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.411 [2024-09-28 10:36:22.011673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:47.411 [2024-09-28 10:36:22.011684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.060 ms 00:15:47.411 [2024-09-28 10:36:22.011691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.411 [2024-09-28 10:36:22.015281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.411 [2024-09-28 10:36:22.015316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:47.411 [2024-09-28 10:36:22.015331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.557 ms 00:15:47.411 [2024-09-28 10:36:22.015338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.411 [2024-09-28 10:36:22.015446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.411 [2024-09-28 10:36:22.015464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:47.411 [2024-09-28 10:36:22.015474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:15:47.411 [2024-09-28 10:36:22.015481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.411 [2024-09-28 10:36:22.017215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.411 [2024-09-28 10:36:22.017326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:15:47.411 [2024-09-28 10:36:22.017343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.712 ms 00:15:47.411 [2024-09-28 10:36:22.017351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.411 [2024-09-28 10:36:22.018666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.411 [2024-09-28 10:36:22.018695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:15:47.411 [2024-09-28 10:36:22.018705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.284 ms 00:15:47.411 [2024-09-28 10:36:22.018712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.411 [2024-09-28 10:36:22.019785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.411 [2024-09-28 10:36:22.019814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:47.411 [2024-09-28 10:36:22.019827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.041 ms 00:15:47.411 [2024-09-28 10:36:22.019834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.411 [2024-09-28 10:36:22.020857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.411 [2024-09-28 10:36:22.020957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:47.411 [2024-09-28 10:36:22.020988] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.974 ms 00:15:47.411 [2024-09-28 10:36:22.020995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.411 [2024-09-28 10:36:22.021022] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:47.411 [2024-09-28 10:36:22.021035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:47.411 [2024-09-28 10:36:22.021047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:47.411 [2024-09-28 10:36:22.021055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:47.411 [2024-09-28 10:36:22.021064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:47.411 [2024-09-28 10:36:22.021071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:47.411 [2024-09-28 10:36:22.021080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:47.411 [2024-09-28 10:36:22.021088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:47.411 [2024-09-28 10:36:22.021097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:47.411 [2024-09-28 10:36:22.021104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:47.411 [2024-09-28 10:36:22.021117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:47.411 [2024-09-28 10:36:22.021125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:47.411 [2024-09-28 10:36:22.021133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 
00:15:47.412 [2024-09-28 10:36:22.021235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 
wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021838] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:47.412 [2024-09-28 10:36:22.021873] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:47.412 [2024-09-28 10:36:22.021884] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e7cb5145-2f39-41b7-8e90-a18e228efde8 00:15:47.412 [2024-09-28 10:36:22.021891] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:47.412 [2024-09-28 10:36:22.021899] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:47.413 [2024-09-28 10:36:22.021906] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:47.413 [2024-09-28 10:36:22.021917] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:47.413 [2024-09-28 10:36:22.021923] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:47.413 [2024-09-28 10:36:22.021932] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:47.413 [2024-09-28 10:36:22.021939] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:47.413 [2024-09-28 10:36:22.021946] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:47.413 [2024-09-28 10:36:22.021953] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:47.413 [2024-09-28 10:36:22.021972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.413 [2024-09-28 10:36:22.021982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:47.413 [2024-09-28 10:36:22.021992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.949 ms 00:15:47.413 [2024-09-28 10:36:22.021998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.413 [2024-09-28 10:36:22.023323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.413 [2024-09-28 10:36:22.023338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:47.413 [2024-09-28 10:36:22.023348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.305 ms 00:15:47.413 [2024-09-28 10:36:22.023356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.413 [2024-09-28 10:36:22.023426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.413 [2024-09-28 10:36:22.023434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:47.413 [2024-09-28 10:36:22.023447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:15:47.413 [2024-09-28 10:36:22.023473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.413 [2024-09-28 10:36:22.028069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:47.413 [2024-09-28 10:36:22.028168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:47.413 [2024-09-28 10:36:22.028224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:47.413 [2024-09-28 10:36:22.028246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.413 [2024-09-28 10:36:22.028310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:47.413 [2024-09-28 
10:36:22.028396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:47.413 [2024-09-28 10:36:22.028424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:47.413 [2024-09-28 10:36:22.028444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.413 [2024-09-28 10:36:22.028511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:47.413 [2024-09-28 10:36:22.028585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:47.413 [2024-09-28 10:36:22.028623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:47.413 [2024-09-28 10:36:22.028642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.413 [2024-09-28 10:36:22.028670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:47.413 [2024-09-28 10:36:22.028695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:47.413 [2024-09-28 10:36:22.028718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:47.413 [2024-09-28 10:36:22.028737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.413 [2024-09-28 10:36:22.036974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:47.413 [2024-09-28 10:36:22.037109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:47.413 [2024-09-28 10:36:22.037163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:47.413 [2024-09-28 10:36:22.037185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.413 [2024-09-28 10:36:22.044477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:47.413 [2024-09-28 10:36:22.044618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:47.413 [2024-09-28 10:36:22.044673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:47.413 [2024-09-28 10:36:22.044698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.413 [2024-09-28 10:36:22.044869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:47.413 [2024-09-28 10:36:22.044942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:47.413 [2024-09-28 10:36:22.045011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:47.413 [2024-09-28 10:36:22.045059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.413 [2024-09-28 10:36:22.045111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:47.413 [2024-09-28 10:36:22.045176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:47.413 [2024-09-28 10:36:22.045204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:47.413 [2024-09-28 10:36:22.045249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.413 [2024-09-28 10:36:22.045339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:47.413 [2024-09-28 10:36:22.045370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:47.413 [2024-09-28 10:36:22.045424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:47.413 [2024-09-28 10:36:22.045446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.413 [2024-09-28 10:36:22.045492] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:47.413 [2024-09-28 10:36:22.045568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:47.413 [2024-09-28 10:36:22.045601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:47.413 [2024-09-28 10:36:22.045621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.413 [2024-09-28 10:36:22.045668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:47.413 [2024-09-28 10:36:22.045699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:47.413 [2024-09-28 10:36:22.045720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:47.413 [2024-09-28 10:36:22.045739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.413 [2024-09-28 10:36:22.045792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:47.413 [2024-09-28 10:36:22.045819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:47.413 [2024-09-28 10:36:22.045842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:47.413 [2024-09-28 10:36:22.045919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.413 [2024-09-28 10:36:22.046068] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 181.990 ms, result 0 00:15:47.413 true 00:15:47.413 10:36:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 85922 00:15:47.413 10:36:22 ftl.ftl_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 85922 ']' 00:15:47.413 10:36:22 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # kill -0 85922 00:15:47.413 10:36:22 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # uname 00:15:47.413 10:36:22 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:47.413 10:36:22 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85922 00:15:47.413 10:36:22 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:47.413 10:36:22 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:47.413 10:36:22 ftl.ftl_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85922' 00:15:47.413 killing process with pid 85922 00:15:47.413 10:36:22 ftl.ftl_bdevperf -- common/autotest_common.sh@969 -- # kill 85922 00:15:47.413 Received shutdown signal, test time was about 4.000000 seconds 00:15:47.413 00:15:47.413 Latency(us) 00:15:47.413 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:47.413 =================================================================================================================== 00:15:47.413 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:47.413 10:36:22 ftl.ftl_bdevperf -- common/autotest_common.sh@974 -- # wait 85922 00:15:47.671 10:36:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:15:47.671 10:36:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:15:47.671 10:36:22 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:15:47.671 Remove shared memory files 00:15:47.671 10:36:22 ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:15:47.671 10:36:22 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:15:47.671 10:36:22 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:15:47.671 10:36:22 
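[editor's note] The killprocess trace above tears down the bdevperf target by first confirming that PID 85922 still maps to the expected reactor process, then signalling it and waiting for it to exit. A minimal bash sketch of that check-then-kill pattern follows; the function and variable names are illustrative, not the actual autotest helpers, and wait only reaps the PID when it is a child of the calling shell.

    # Sketch: kill a target process only if the PID still belongs to the expected command.
    kill_if_alive() {
      local pid=$1 expected=$2
      kill -0 "$pid" 2>/dev/null || return 0           # already gone, nothing to do
      local comm
      comm=$(ps --no-headers -o comm= "$pid")          # same probe the trace uses
      [ "$comm" = "$expected" ] || { echo "PID $pid is now '$comm', refusing to kill" >&2; return 1; }
      kill "$pid"
      wait "$pid" 2>/dev/null                          # reap it if it is our child
    }

    # e.g. kill_if_alive "$svcpid" reactor_0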
ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:15:47.671 10:36:22 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:15:47.671 ************************************ 00:15:47.671 END TEST ftl_bdevperf 00:15:47.671 ************************************ 00:15:47.671 00:15:47.671 real 0m19.283s 00:15:47.671 user 0m21.918s 00:15:47.671 sys 0m0.749s 00:15:47.671 10:36:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:47.671 10:36:22 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:47.671 10:36:22 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:15:47.671 10:36:22 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:47.671 10:36:22 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:47.671 10:36:22 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:47.671 ************************************ 00:15:47.671 START TEST ftl_trim 00:15:47.671 ************************************ 00:15:47.671 10:36:22 ftl.ftl_trim -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:15:47.671 * Looking for test storage... 00:15:47.671 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:47.671 10:36:22 ftl.ftl_trim -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:47.671 10:36:22 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lcov --version 00:15:47.671 10:36:22 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:47.930 10:36:22 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:47.930 10:36:22 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:47.930 10:36:22 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:47.930 10:36:22 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:47.930 10:36:22 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:15:47.930 10:36:22 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:15:47.930 10:36:22 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:15:47.930 10:36:22 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:15:47.930 10:36:22 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:15:47.930 10:36:22 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:15:47.930 10:36:22 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:15:47.930 10:36:22 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:47.930 10:36:22 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:15:47.930 10:36:22 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:15:47.930 10:36:22 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:47.930 10:36:22 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:47.930 10:36:22 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:15:47.930 10:36:22 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:15:47.930 10:36:22 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:47.930 10:36:22 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:15:47.930 10:36:22 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:15:47.930 10:36:22 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:15:47.930 10:36:22 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:15:47.930 10:36:22 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:47.930 10:36:22 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:15:47.930 10:36:22 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:15:47.930 10:36:22 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:47.930 10:36:22 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:47.930 10:36:22 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:15:47.930 10:36:22 ftl.ftl_trim -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:47.930 10:36:22 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:47.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:47.930 --rc genhtml_branch_coverage=1 00:15:47.930 --rc genhtml_function_coverage=1 00:15:47.930 --rc genhtml_legend=1 00:15:47.930 --rc geninfo_all_blocks=1 00:15:47.930 --rc geninfo_unexecuted_blocks=1 00:15:47.930 00:15:47.930 ' 00:15:47.930 10:36:22 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:47.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:47.930 --rc genhtml_branch_coverage=1 00:15:47.930 --rc genhtml_function_coverage=1 00:15:47.930 --rc genhtml_legend=1 00:15:47.930 --rc geninfo_all_blocks=1 00:15:47.930 --rc geninfo_unexecuted_blocks=1 00:15:47.930 00:15:47.930 ' 00:15:47.930 10:36:22 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:47.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:47.930 --rc genhtml_branch_coverage=1 00:15:47.930 --rc genhtml_function_coverage=1 00:15:47.930 --rc genhtml_legend=1 00:15:47.930 --rc geninfo_all_blocks=1 00:15:47.930 --rc geninfo_unexecuted_blocks=1 00:15:47.930 00:15:47.930 ' 00:15:47.930 10:36:22 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:47.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:47.930 --rc genhtml_branch_coverage=1 00:15:47.930 --rc genhtml_function_coverage=1 00:15:47.930 --rc genhtml_legend=1 00:15:47.930 --rc geninfo_all_blocks=1 00:15:47.930 --rc geninfo_unexecuted_blocks=1 00:15:47.930 00:15:47.930 ' 00:15:47.930 10:36:22 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:47.930 10:36:22 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:15:47.930 10:36:22 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:47.930 10:36:22 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:47.930 10:36:22 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
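[editor's note] The scripts/common.sh trace a few entries back is the harness deciding whether the installed lcov is older than 2.x: both version strings are split on dots, dashes and colons, then compared field by field as integers. A compact bash sketch of that dotted-version comparison is below, assuming purely numeric fields; the function name is illustrative, the real helper lives in scripts/common.sh.

    # Sketch: succeed if version $1 is strictly less than version $2.
    version_lt() {
      local IFS=.-:
      local -a a=($1) b=($2)
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
      done
      return 1    # equal is not "less than"
    }

    version_lt 1.15 2 && echo "old lcov, enable branch/function coverage flags"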
00:15:47.930 10:36:22 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:47.930 10:36:22 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:47.930 10:36:22 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:47.930 10:36:22 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:47.930 10:36:22 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:47.930 10:36:22 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:47.930 10:36:22 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:47.930 10:36:22 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:47.930 10:36:22 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:47.930 10:36:22 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:47.930 10:36:22 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:47.930 10:36:22 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:47.930 10:36:22 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:47.930 10:36:22 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:47.930 10:36:22 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:47.930 10:36:22 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:47.930 10:36:22 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:47.930 10:36:22 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:47.930 10:36:22 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:47.930 10:36:22 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:47.930 10:36:22 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:47.930 10:36:22 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:47.930 10:36:22 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:47.930 10:36:22 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:47.930 10:36:22 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:47.930 10:36:22 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:15:47.930 10:36:22 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:15:47.930 10:36:22 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:15:47.930 10:36:22 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:15:47.931 10:36:22 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:15:47.931 10:36:22 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:15:47.931 10:36:22 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:15:47.931 10:36:22 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:15:47.931 10:36:22 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:47.931 10:36:22 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:47.931 10:36:22 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:15:47.931 10:36:22 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=86245 00:15:47.931 10:36:22 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 86245 00:15:47.931 10:36:22 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 86245 ']' 00:15:47.931 10:36:22 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:47.931 10:36:22 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:47.931 10:36:22 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:47.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:47.931 10:36:22 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:15:47.931 10:36:22 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:47.931 10:36:22 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:15:47.931 [2024-09-28 10:36:22.610031] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:15:47.931 [2024-09-28 10:36:22.610287] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86245 ] 00:15:48.219 [2024-09-28 10:36:22.738738] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:48.219 [2024-09-28 10:36:22.756761] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:48.219 [2024-09-28 10:36:22.789600] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:48.219 [2024-09-28 10:36:22.789867] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:48.219 [2024-09-28 10:36:22.789929] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:48.818 10:36:23 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:48.818 10:36:23 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:15:48.818 10:36:23 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:15:48.818 10:36:23 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:15:48.818 10:36:23 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:15:48.818 10:36:23 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:15:48.818 10:36:23 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:15:48.818 10:36:23 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:15:49.080 10:36:23 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:49.080 10:36:23 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:15:49.080 10:36:23 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:49.080 10:36:23 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:15:49.080 10:36:23 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:49.080 10:36:23 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:15:49.080 10:36:23 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:15:49.080 10:36:23 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:49.342 10:36:23 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:49.342 { 00:15:49.342 "name": "nvme0n1", 00:15:49.342 "aliases": [ 00:15:49.342 "12d2f071-3c81-4f77-b854-792073a9345e" 00:15:49.342 ], 00:15:49.342 "product_name": "NVMe disk", 00:15:49.342 "block_size": 4096, 00:15:49.342 "num_blocks": 1310720, 00:15:49.342 "uuid": "12d2f071-3c81-4f77-b854-792073a9345e", 00:15:49.342 "numa_id": -1, 00:15:49.342 "assigned_rate_limits": { 00:15:49.342 "rw_ios_per_sec": 0, 00:15:49.342 "rw_mbytes_per_sec": 0, 00:15:49.342 "r_mbytes_per_sec": 0, 00:15:49.342 "w_mbytes_per_sec": 0 00:15:49.342 }, 00:15:49.342 "claimed": true, 00:15:49.342 "claim_type": "read_many_write_one", 00:15:49.342 "zoned": false, 00:15:49.342 "supported_io_types": { 00:15:49.342 "read": true, 00:15:49.342 "write": true, 00:15:49.342 "unmap": true, 00:15:49.342 "flush": true, 00:15:49.342 "reset": true, 00:15:49.342 "nvme_admin": true, 00:15:49.342 "nvme_io": true, 00:15:49.342 "nvme_io_md": false, 00:15:49.342 "write_zeroes": true, 00:15:49.342 "zcopy": false, 00:15:49.342 "get_zone_info": false, 00:15:49.342 "zone_management": false, 00:15:49.342 "zone_append": false, 00:15:49.342 "compare": true, 00:15:49.342 "compare_and_write": false, 00:15:49.342 "abort": true, 00:15:49.342 "seek_hole": false, 00:15:49.342 "seek_data": false, 00:15:49.342 "copy": true, 00:15:49.342 "nvme_iov_md": false 00:15:49.342 }, 00:15:49.342 "driver_specific": { 00:15:49.342 "nvme": [ 00:15:49.342 { 00:15:49.342 "pci_address": "0000:00:11.0", 00:15:49.342 "trid": { 00:15:49.342 "trtype": "PCIe", 00:15:49.342 "traddr": "0000:00:11.0" 00:15:49.342 }, 00:15:49.342 "ctrlr_data": { 00:15:49.342 "cntlid": 0, 00:15:49.342 "vendor_id": "0x1b36", 00:15:49.342 "model_number": "QEMU NVMe Ctrl", 00:15:49.342 "serial_number": "12341", 00:15:49.342 "firmware_revision": "8.0.0", 00:15:49.342 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:49.342 "oacs": { 00:15:49.342 "security": 0, 00:15:49.342 "format": 1, 00:15:49.342 "firmware": 0, 00:15:49.342 "ns_manage": 1 00:15:49.342 }, 00:15:49.342 "multi_ctrlr": false, 00:15:49.342 "ana_reporting": false 00:15:49.342 }, 00:15:49.342 "vs": { 00:15:49.342 "nvme_version": "1.4" 00:15:49.342 }, 00:15:49.342 "ns_data": { 00:15:49.342 "id": 1, 00:15:49.342 "can_share": false 00:15:49.342 } 00:15:49.342 } 00:15:49.342 ], 00:15:49.342 "mp_policy": "active_passive" 00:15:49.342 } 00:15:49.342 } 00:15:49.342 ]' 00:15:49.342 10:36:23 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:49.342 10:36:23 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:15:49.342 10:36:23 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:49.342 10:36:23 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:15:49.342 10:36:23 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:15:49.342 10:36:23 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:15:49.342 10:36:23 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:15:49.342 10:36:23 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:49.342 10:36:23 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:15:49.342 10:36:23 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:49.342 10:36:23 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:49.603 10:36:24 ftl.ftl_trim -- ftl/common.sh@28 -- # 
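[editor's note] The get_bdev_size trace above reduces to one RPC plus two jq extractions: block_size times num_blocks, converted to MiB, which is how nvme0n1's 4096-byte blocks times 1310720 blocks become the 5120 MiB used for the size check. A hedged sketch of that calculation follows; the rpc.py path and bdev name are taken from the log, the helper name is illustrative.

    # Sketch: report a bdev's size in MiB via the same JSON-RPC used in the trace.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdev_size_mib() {
      local name=$1 info bs nb
      info=$("$rpc" bdev_get_bdevs -b "$name")
      bs=$(jq -r '.[0].block_size' <<<"$info")
      nb=$(jq -r '.[0].num_blocks' <<<"$info")
      echo $(( bs * nb / 1024 / 1024 ))   # 4096 * 1310720 / 2^20 = 5120 for nvme0n1
    }

    bdev_size_mib nvme0n1    # prints 5120 for the device dumped above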
stores=f3adcfaa-b62d-4569-b67f-ba4cd01b8b02 00:15:49.603 10:36:24 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:15:49.603 10:36:24 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f3adcfaa-b62d-4569-b67f-ba4cd01b8b02 00:15:49.864 10:36:24 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:49.864 10:36:24 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=a9484171-b294-4ed4-a768-f4d5c2ae4f8c 00:15:49.864 10:36:24 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a9484171-b294-4ed4-a768-f4d5c2ae4f8c 00:15:50.124 10:36:24 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=aaaa1daa-5fd8-430e-85e6-fd2a43c95027 00:15:50.124 10:36:24 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 aaaa1daa-5fd8-430e-85e6-fd2a43c95027 00:15:50.124 10:36:24 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:15:50.124 10:36:24 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:15:50.124 10:36:24 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=aaaa1daa-5fd8-430e-85e6-fd2a43c95027 00:15:50.124 10:36:24 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:15:50.125 10:36:24 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size aaaa1daa-5fd8-430e-85e6-fd2a43c95027 00:15:50.125 10:36:24 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=aaaa1daa-5fd8-430e-85e6-fd2a43c95027 00:15:50.125 10:36:24 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:50.125 10:36:24 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:15:50.125 10:36:24 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:15:50.125 10:36:24 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b aaaa1daa-5fd8-430e-85e6-fd2a43c95027 00:15:50.385 10:36:25 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:50.385 { 00:15:50.385 "name": "aaaa1daa-5fd8-430e-85e6-fd2a43c95027", 00:15:50.385 "aliases": [ 00:15:50.385 "lvs/nvme0n1p0" 00:15:50.385 ], 00:15:50.385 "product_name": "Logical Volume", 00:15:50.385 "block_size": 4096, 00:15:50.385 "num_blocks": 26476544, 00:15:50.385 "uuid": "aaaa1daa-5fd8-430e-85e6-fd2a43c95027", 00:15:50.385 "assigned_rate_limits": { 00:15:50.385 "rw_ios_per_sec": 0, 00:15:50.385 "rw_mbytes_per_sec": 0, 00:15:50.385 "r_mbytes_per_sec": 0, 00:15:50.385 "w_mbytes_per_sec": 0 00:15:50.385 }, 00:15:50.385 "claimed": false, 00:15:50.385 "zoned": false, 00:15:50.385 "supported_io_types": { 00:15:50.385 "read": true, 00:15:50.385 "write": true, 00:15:50.385 "unmap": true, 00:15:50.385 "flush": false, 00:15:50.385 "reset": true, 00:15:50.385 "nvme_admin": false, 00:15:50.385 "nvme_io": false, 00:15:50.385 "nvme_io_md": false, 00:15:50.385 "write_zeroes": true, 00:15:50.385 "zcopy": false, 00:15:50.385 "get_zone_info": false, 00:15:50.385 "zone_management": false, 00:15:50.385 "zone_append": false, 00:15:50.385 "compare": false, 00:15:50.385 "compare_and_write": false, 00:15:50.385 "abort": false, 00:15:50.385 "seek_hole": true, 00:15:50.385 "seek_data": true, 00:15:50.385 "copy": false, 00:15:50.385 "nvme_iov_md": false 00:15:50.385 }, 00:15:50.385 "driver_specific": { 00:15:50.385 "lvol": { 00:15:50.385 "lvol_store_uuid": "a9484171-b294-4ed4-a768-f4d5c2ae4f8c", 00:15:50.385 "base_bdev": "nvme0n1", 00:15:50.385 "thin_provision": true, 
00:15:50.385 "num_allocated_clusters": 0, 00:15:50.385 "snapshot": false, 00:15:50.385 "clone": false, 00:15:50.385 "esnap_clone": false 00:15:50.385 } 00:15:50.385 } 00:15:50.385 } 00:15:50.385 ]' 00:15:50.385 10:36:25 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:50.385 10:36:25 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:15:50.385 10:36:25 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:50.385 10:36:25 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:50.385 10:36:25 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:50.385 10:36:25 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:15:50.385 10:36:25 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:15:50.385 10:36:25 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:15:50.385 10:36:25 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:15:50.646 10:36:25 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:50.646 10:36:25 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:50.646 10:36:25 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size aaaa1daa-5fd8-430e-85e6-fd2a43c95027 00:15:50.646 10:36:25 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=aaaa1daa-5fd8-430e-85e6-fd2a43c95027 00:15:50.646 10:36:25 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:50.646 10:36:25 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:15:50.646 10:36:25 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:15:50.646 10:36:25 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b aaaa1daa-5fd8-430e-85e6-fd2a43c95027 00:15:50.905 10:36:25 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:50.905 { 00:15:50.905 "name": "aaaa1daa-5fd8-430e-85e6-fd2a43c95027", 00:15:50.905 "aliases": [ 00:15:50.905 "lvs/nvme0n1p0" 00:15:50.905 ], 00:15:50.905 "product_name": "Logical Volume", 00:15:50.905 "block_size": 4096, 00:15:50.905 "num_blocks": 26476544, 00:15:50.905 "uuid": "aaaa1daa-5fd8-430e-85e6-fd2a43c95027", 00:15:50.905 "assigned_rate_limits": { 00:15:50.905 "rw_ios_per_sec": 0, 00:15:50.905 "rw_mbytes_per_sec": 0, 00:15:50.905 "r_mbytes_per_sec": 0, 00:15:50.905 "w_mbytes_per_sec": 0 00:15:50.905 }, 00:15:50.905 "claimed": false, 00:15:50.905 "zoned": false, 00:15:50.905 "supported_io_types": { 00:15:50.905 "read": true, 00:15:50.905 "write": true, 00:15:50.905 "unmap": true, 00:15:50.905 "flush": false, 00:15:50.905 "reset": true, 00:15:50.905 "nvme_admin": false, 00:15:50.905 "nvme_io": false, 00:15:50.905 "nvme_io_md": false, 00:15:50.905 "write_zeroes": true, 00:15:50.905 "zcopy": false, 00:15:50.905 "get_zone_info": false, 00:15:50.905 "zone_management": false, 00:15:50.905 "zone_append": false, 00:15:50.905 "compare": false, 00:15:50.905 "compare_and_write": false, 00:15:50.905 "abort": false, 00:15:50.905 "seek_hole": true, 00:15:50.905 "seek_data": true, 00:15:50.905 "copy": false, 00:15:50.905 "nvme_iov_md": false 00:15:50.905 }, 00:15:50.905 "driver_specific": { 00:15:50.905 "lvol": { 00:15:50.905 "lvol_store_uuid": "a9484171-b294-4ed4-a768-f4d5c2ae4f8c", 00:15:50.905 "base_bdev": "nvme0n1", 00:15:50.905 "thin_provision": true, 00:15:50.905 "num_allocated_clusters": 0, 00:15:50.905 "snapshot": false, 00:15:50.905 "clone": false, 00:15:50.905 
"esnap_clone": false 00:15:50.905 } 00:15:50.905 } 00:15:50.905 } 00:15:50.905 ]' 00:15:50.905 10:36:25 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:50.905 10:36:25 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:15:50.905 10:36:25 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:50.905 10:36:25 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:50.905 10:36:25 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:50.905 10:36:25 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:15:50.905 10:36:25 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:15:50.905 10:36:25 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:51.164 10:36:25 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:15:51.164 10:36:25 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:15:51.164 10:36:25 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size aaaa1daa-5fd8-430e-85e6-fd2a43c95027 00:15:51.164 10:36:25 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=aaaa1daa-5fd8-430e-85e6-fd2a43c95027 00:15:51.164 10:36:25 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:51.164 10:36:25 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:15:51.164 10:36:25 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:15:51.164 10:36:25 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b aaaa1daa-5fd8-430e-85e6-fd2a43c95027 00:15:51.423 10:36:26 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:51.423 { 00:15:51.423 "name": "aaaa1daa-5fd8-430e-85e6-fd2a43c95027", 00:15:51.423 "aliases": [ 00:15:51.423 "lvs/nvme0n1p0" 00:15:51.423 ], 00:15:51.423 "product_name": "Logical Volume", 00:15:51.423 "block_size": 4096, 00:15:51.423 "num_blocks": 26476544, 00:15:51.423 "uuid": "aaaa1daa-5fd8-430e-85e6-fd2a43c95027", 00:15:51.423 "assigned_rate_limits": { 00:15:51.423 "rw_ios_per_sec": 0, 00:15:51.423 "rw_mbytes_per_sec": 0, 00:15:51.423 "r_mbytes_per_sec": 0, 00:15:51.423 "w_mbytes_per_sec": 0 00:15:51.423 }, 00:15:51.423 "claimed": false, 00:15:51.423 "zoned": false, 00:15:51.423 "supported_io_types": { 00:15:51.423 "read": true, 00:15:51.423 "write": true, 00:15:51.423 "unmap": true, 00:15:51.423 "flush": false, 00:15:51.423 "reset": true, 00:15:51.423 "nvme_admin": false, 00:15:51.423 "nvme_io": false, 00:15:51.423 "nvme_io_md": false, 00:15:51.423 "write_zeroes": true, 00:15:51.423 "zcopy": false, 00:15:51.423 "get_zone_info": false, 00:15:51.423 "zone_management": false, 00:15:51.423 "zone_append": false, 00:15:51.423 "compare": false, 00:15:51.423 "compare_and_write": false, 00:15:51.423 "abort": false, 00:15:51.423 "seek_hole": true, 00:15:51.423 "seek_data": true, 00:15:51.423 "copy": false, 00:15:51.423 "nvme_iov_md": false 00:15:51.423 }, 00:15:51.423 "driver_specific": { 00:15:51.423 "lvol": { 00:15:51.423 "lvol_store_uuid": "a9484171-b294-4ed4-a768-f4d5c2ae4f8c", 00:15:51.423 "base_bdev": "nvme0n1", 00:15:51.423 "thin_provision": true, 00:15:51.423 "num_allocated_clusters": 0, 00:15:51.423 "snapshot": false, 00:15:51.423 "clone": false, 00:15:51.423 "esnap_clone": false 00:15:51.423 } 00:15:51.423 } 00:15:51.423 } 00:15:51.423 ]' 00:15:51.423 10:36:26 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:51.423 10:36:26 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # bs=4096 00:15:51.423 10:36:26 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:51.423 10:36:26 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:51.423 10:36:26 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:51.423 10:36:26 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:15:51.423 10:36:26 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:15:51.423 10:36:26 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d aaaa1daa-5fd8-430e-85e6-fd2a43c95027 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:15:51.682 [2024-09-28 10:36:26.310743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.682 [2024-09-28 10:36:26.310787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:51.682 [2024-09-28 10:36:26.310810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:51.682 [2024-09-28 10:36:26.310819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.682 [2024-09-28 10:36:26.313181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.682 [2024-09-28 10:36:26.313215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:51.682 [2024-09-28 10:36:26.313229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.336 ms 00:15:51.683 [2024-09-28 10:36:26.313237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.683 [2024-09-28 10:36:26.313337] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:51.683 [2024-09-28 10:36:26.313575] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:51.683 [2024-09-28 10:36:26.313603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.683 [2024-09-28 10:36:26.313611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:51.683 [2024-09-28 10:36:26.313622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:15:51.683 [2024-09-28 10:36:26.313629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.683 [2024-09-28 10:36:26.313953] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 1563e554-b486-4792-9f1b-1b87095bfb08 00:15:51.683 [2024-09-28 10:36:26.315001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.683 [2024-09-28 10:36:26.315036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:51.683 [2024-09-28 10:36:26.315047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:15:51.683 [2024-09-28 10:36:26.315057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.683 [2024-09-28 10:36:26.319973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.683 [2024-09-28 10:36:26.319995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:51.683 [2024-09-28 10:36:26.320004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.847 ms 00:15:51.683 [2024-09-28 10:36:26.320016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.683 [2024-09-28 10:36:26.320138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:15:51.683 [2024-09-28 10:36:26.320151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:51.683 [2024-09-28 10:36:26.320159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:15:51.683 [2024-09-28 10:36:26.320168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.683 [2024-09-28 10:36:26.320202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.683 [2024-09-28 10:36:26.320212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:51.683 [2024-09-28 10:36:26.320220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:15:51.683 [2024-09-28 10:36:26.320229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.683 [2024-09-28 10:36:26.320281] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:15:51.683 [2024-09-28 10:36:26.321694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.683 [2024-09-28 10:36:26.321724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:51.683 [2024-09-28 10:36:26.321735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.416 ms 00:15:51.683 [2024-09-28 10:36:26.321742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.683 [2024-09-28 10:36:26.321786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.683 [2024-09-28 10:36:26.321794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:51.683 [2024-09-28 10:36:26.321805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:15:51.683 [2024-09-28 10:36:26.321822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.683 [2024-09-28 10:36:26.321850] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:51.683 [2024-09-28 10:36:26.322002] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:15:51.683 [2024-09-28 10:36:26.322025] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:51.683 [2024-09-28 10:36:26.322036] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:15:51.683 [2024-09-28 10:36:26.322048] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:51.683 [2024-09-28 10:36:26.322056] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:51.683 [2024-09-28 10:36:26.322068] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:15:51.683 [2024-09-28 10:36:26.322075] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:51.683 [2024-09-28 10:36:26.322084] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:15:51.683 [2024-09-28 10:36:26.322090] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:15:51.683 [2024-09-28 10:36:26.322099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.683 [2024-09-28 10:36:26.322106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:51.683 [2024-09-28 10:36:26.322118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 
00:15:51.683 [2024-09-28 10:36:26.322125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.683 [2024-09-28 10:36:26.322222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.683 [2024-09-28 10:36:26.322231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:51.683 [2024-09-28 10:36:26.322249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:15:51.683 [2024-09-28 10:36:26.322256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.683 [2024-09-28 10:36:26.322370] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:51.683 [2024-09-28 10:36:26.322389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:51.683 [2024-09-28 10:36:26.322399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:51.683 [2024-09-28 10:36:26.322418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:51.683 [2024-09-28 10:36:26.322428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:51.683 [2024-09-28 10:36:26.322435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:51.683 [2024-09-28 10:36:26.322446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:15:51.683 [2024-09-28 10:36:26.322453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:51.683 [2024-09-28 10:36:26.322463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:15:51.683 [2024-09-28 10:36:26.322470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:51.683 [2024-09-28 10:36:26.322479] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:51.683 [2024-09-28 10:36:26.322487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:15:51.683 [2024-09-28 10:36:26.322497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:51.683 [2024-09-28 10:36:26.322505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:51.683 [2024-09-28 10:36:26.322514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:15:51.683 [2024-09-28 10:36:26.322521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:51.683 [2024-09-28 10:36:26.322530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:51.683 [2024-09-28 10:36:26.322538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:15:51.683 [2024-09-28 10:36:26.322552] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:51.683 [2024-09-28 10:36:26.322560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:51.683 [2024-09-28 10:36:26.322571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:15:51.683 [2024-09-28 10:36:26.322578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:51.683 [2024-09-28 10:36:26.322587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:51.683 [2024-09-28 10:36:26.322595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:15:51.683 [2024-09-28 10:36:26.322604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:51.683 [2024-09-28 10:36:26.322612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:51.683 [2024-09-28 10:36:26.322621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 
107.12 MiB 00:15:51.683 [2024-09-28 10:36:26.322628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:51.683 [2024-09-28 10:36:26.322638] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:51.683 [2024-09-28 10:36:26.322646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:15:51.683 [2024-09-28 10:36:26.322654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:51.683 [2024-09-28 10:36:26.322662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:51.683 [2024-09-28 10:36:26.322672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:15:51.683 [2024-09-28 10:36:26.322679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:51.683 [2024-09-28 10:36:26.322688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:51.683 [2024-09-28 10:36:26.322696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:15:51.683 [2024-09-28 10:36:26.322704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:51.683 [2024-09-28 10:36:26.322712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:15:51.683 [2024-09-28 10:36:26.322721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:15:51.683 [2024-09-28 10:36:26.322728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:51.683 [2024-09-28 10:36:26.322737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:15:51.683 [2024-09-28 10:36:26.322744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:15:51.683 [2024-09-28 10:36:26.322753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:51.683 [2024-09-28 10:36:26.322760] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:51.683 [2024-09-28 10:36:26.322771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:51.683 [2024-09-28 10:36:26.322786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:51.683 [2024-09-28 10:36:26.322795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:51.683 [2024-09-28 10:36:26.322802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:51.683 [2024-09-28 10:36:26.322810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:51.683 [2024-09-28 10:36:26.322816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:51.683 [2024-09-28 10:36:26.322824] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:51.683 [2024-09-28 10:36:26.322830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:51.683 [2024-09-28 10:36:26.322839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:51.683 [2024-09-28 10:36:26.322848] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:51.684 [2024-09-28 10:36:26.322859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:51.684 [2024-09-28 10:36:26.322867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:15:51.684 [2024-09-28 10:36:26.322877] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:15:51.684 [2024-09-28 10:36:26.322887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:15:51.684 [2024-09-28 10:36:26.322896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:15:51.684 [2024-09-28 10:36:26.322903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:15:51.684 [2024-09-28 10:36:26.322913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:15:51.684 [2024-09-28 10:36:26.322920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:15:51.684 [2024-09-28 10:36:26.322929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:15:51.684 [2024-09-28 10:36:26.322935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:15:51.684 [2024-09-28 10:36:26.322944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:15:51.684 [2024-09-28 10:36:26.322950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:15:51.684 [2024-09-28 10:36:26.322969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:15:51.684 [2024-09-28 10:36:26.322977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:15:51.684 [2024-09-28 10:36:26.322986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:15:51.684 [2024-09-28 10:36:26.322992] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:51.684 [2024-09-28 10:36:26.323002] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:51.684 [2024-09-28 10:36:26.323009] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:51.684 [2024-09-28 10:36:26.323017] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:51.684 [2024-09-28 10:36:26.323025] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:51.684 [2024-09-28 10:36:26.323033] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:51.684 [2024-09-28 10:36:26.323041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.684 [2024-09-28 10:36:26.323061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:51.684 [2024-09-28 10:36:26.323068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.738 ms 00:15:51.684 
[2024-09-28 10:36:26.323078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.684 [2024-09-28 10:36:26.323142] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:15:51.684 [2024-09-28 10:36:26.323161] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:15:54.211 [2024-09-28 10:36:28.421510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.211 [2024-09-28 10:36:28.421569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:54.211 [2024-09-28 10:36:28.421584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2098.362 ms 00:15:54.211 [2024-09-28 10:36:28.421595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.211 [2024-09-28 10:36:28.441545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.211 [2024-09-28 10:36:28.441608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:54.211 [2024-09-28 10:36:28.441624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.860 ms 00:15:54.211 [2024-09-28 10:36:28.441639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.211 [2024-09-28 10:36:28.441813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.211 [2024-09-28 10:36:28.441846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:54.211 [2024-09-28 10:36:28.441857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:15:54.211 [2024-09-28 10:36:28.441870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.211 [2024-09-28 10:36:28.450878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.211 [2024-09-28 10:36:28.450919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:54.211 [2024-09-28 10:36:28.450930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.969 ms 00:15:54.211 [2024-09-28 10:36:28.450940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.211 [2024-09-28 10:36:28.451014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.211 [2024-09-28 10:36:28.451038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:54.211 [2024-09-28 10:36:28.451048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:54.211 [2024-09-28 10:36:28.451061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.211 [2024-09-28 10:36:28.451381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.211 [2024-09-28 10:36:28.451401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:54.211 [2024-09-28 10:36:28.451410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:15:54.211 [2024-09-28 10:36:28.451422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.211 [2024-09-28 10:36:28.451593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.211 [2024-09-28 10:36:28.451605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:54.211 [2024-09-28 10:36:28.451615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:15:54.211 [2024-09-28 10:36:28.451626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:15:54.211 [2024-09-28 10:36:28.457348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.211 [2024-09-28 10:36:28.457475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:54.211 [2024-09-28 10:36:28.457534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.686 ms 00:15:54.211 [2024-09-28 10:36:28.457562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.211 [2024-09-28 10:36:28.466060] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:54.211 [2024-09-28 10:36:28.480068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.211 [2024-09-28 10:36:28.480178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:54.211 [2024-09-28 10:36:28.480228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.332 ms 00:15:54.211 [2024-09-28 10:36:28.480251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.211 [2024-09-28 10:36:28.530655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.211 [2024-09-28 10:36:28.530810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:54.211 [2024-09-28 10:36:28.530873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.305 ms 00:15:54.211 [2024-09-28 10:36:28.530917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.211 [2024-09-28 10:36:28.531136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.211 [2024-09-28 10:36:28.531176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:54.211 [2024-09-28 10:36:28.531228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:15:54.211 [2024-09-28 10:36:28.531303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.211 [2024-09-28 10:36:28.534545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.211 [2024-09-28 10:36:28.534650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:54.211 [2024-09-28 10:36:28.534705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.193 ms 00:15:54.211 [2024-09-28 10:36:28.534728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.211 [2024-09-28 10:36:28.537463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.211 [2024-09-28 10:36:28.537562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:54.211 [2024-09-28 10:36:28.537580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.677 ms 00:15:54.211 [2024-09-28 10:36:28.537587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.211 [2024-09-28 10:36:28.537881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.211 [2024-09-28 10:36:28.537898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:54.211 [2024-09-28 10:36:28.537911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:15:54.211 [2024-09-28 10:36:28.537919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.211 [2024-09-28 10:36:28.562794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.211 [2024-09-28 10:36:28.562910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L 
region 00:15:54.211 [2024-09-28 10:36:28.562997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.847 ms 00:15:54.211 [2024-09-28 10:36:28.563023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.211 [2024-09-28 10:36:28.566696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.211 [2024-09-28 10:36:28.566800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:54.211 [2024-09-28 10:36:28.566862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.591 ms 00:15:54.211 [2024-09-28 10:36:28.566889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.211 [2024-09-28 10:36:28.570259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.211 [2024-09-28 10:36:28.570289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:15:54.211 [2024-09-28 10:36:28.570300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.260 ms 00:15:54.211 [2024-09-28 10:36:28.570318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.211 [2024-09-28 10:36:28.573600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.211 [2024-09-28 10:36:28.573709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:54.211 [2024-09-28 10:36:28.573729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.238 ms 00:15:54.211 [2024-09-28 10:36:28.573736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.211 [2024-09-28 10:36:28.573801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.211 [2024-09-28 10:36:28.573811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:54.211 [2024-09-28 10:36:28.573832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:15:54.211 [2024-09-28 10:36:28.573839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.211 [2024-09-28 10:36:28.573911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.211 [2024-09-28 10:36:28.573920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:54.211 [2024-09-28 10:36:28.573929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:15:54.211 [2024-09-28 10:36:28.573936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.211 [2024-09-28 10:36:28.574742] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:54.211 [2024-09-28 10:36:28.575769] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2263.727 ms, result 0 00:15:54.211 [2024-09-28 10:36:28.576463] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:54.211 { 00:15:54.211 "name": "ftl0", 00:15:54.211 "uuid": "1563e554-b486-4792-9f1b-1b87095bfb08" 00:15:54.211 } 00:15:54.211 10:36:28 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:15:54.211 10:36:28 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:15:54.211 10:36:28 ftl.ftl_trim -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:54.211 10:36:28 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local i 00:15:54.211 10:36:28 ftl.ftl_trim -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:54.211 10:36:28 ftl.ftl_trim -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:54.211 10:36:28 ftl.ftl_trim -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:54.212 10:36:28 ftl.ftl_trim -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:15:54.470 [ 00:15:54.470 { 00:15:54.470 "name": "ftl0", 00:15:54.470 "aliases": [ 00:15:54.470 "1563e554-b486-4792-9f1b-1b87095bfb08" 00:15:54.470 ], 00:15:54.470 "product_name": "FTL disk", 00:15:54.470 "block_size": 4096, 00:15:54.470 "num_blocks": 23592960, 00:15:54.470 "uuid": "1563e554-b486-4792-9f1b-1b87095bfb08", 00:15:54.470 "assigned_rate_limits": { 00:15:54.470 "rw_ios_per_sec": 0, 00:15:54.470 "rw_mbytes_per_sec": 0, 00:15:54.470 "r_mbytes_per_sec": 0, 00:15:54.470 "w_mbytes_per_sec": 0 00:15:54.470 }, 00:15:54.470 "claimed": false, 00:15:54.470 "zoned": false, 00:15:54.470 "supported_io_types": { 00:15:54.470 "read": true, 00:15:54.470 "write": true, 00:15:54.470 "unmap": true, 00:15:54.470 "flush": true, 00:15:54.470 "reset": false, 00:15:54.470 "nvme_admin": false, 00:15:54.470 "nvme_io": false, 00:15:54.470 "nvme_io_md": false, 00:15:54.470 "write_zeroes": true, 00:15:54.470 "zcopy": false, 00:15:54.470 "get_zone_info": false, 00:15:54.470 "zone_management": false, 00:15:54.470 "zone_append": false, 00:15:54.470 "compare": false, 00:15:54.470 "compare_and_write": false, 00:15:54.470 "abort": false, 00:15:54.470 "seek_hole": false, 00:15:54.470 "seek_data": false, 00:15:54.470 "copy": false, 00:15:54.470 "nvme_iov_md": false 00:15:54.470 }, 00:15:54.470 "driver_specific": { 00:15:54.470 "ftl": { 00:15:54.470 "base_bdev": "aaaa1daa-5fd8-430e-85e6-fd2a43c95027", 00:15:54.470 "cache": "nvc0n1p0" 00:15:54.470 } 00:15:54.470 } 00:15:54.470 } 00:15:54.470 ] 00:15:54.470 10:36:28 ftl.ftl_trim -- common/autotest_common.sh@907 -- # return 0 00:15:54.470 10:36:28 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:15:54.470 10:36:28 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:15:54.470 10:36:29 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:15:54.470 10:36:29 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:15:54.732 10:36:29 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:15:54.732 { 00:15:54.732 "name": "ftl0", 00:15:54.732 "aliases": [ 00:15:54.732 "1563e554-b486-4792-9f1b-1b87095bfb08" 00:15:54.732 ], 00:15:54.732 "product_name": "FTL disk", 00:15:54.732 "block_size": 4096, 00:15:54.732 "num_blocks": 23592960, 00:15:54.732 "uuid": "1563e554-b486-4792-9f1b-1b87095bfb08", 00:15:54.732 "assigned_rate_limits": { 00:15:54.732 "rw_ios_per_sec": 0, 00:15:54.732 "rw_mbytes_per_sec": 0, 00:15:54.732 "r_mbytes_per_sec": 0, 00:15:54.732 "w_mbytes_per_sec": 0 00:15:54.732 }, 00:15:54.732 "claimed": false, 00:15:54.732 "zoned": false, 00:15:54.732 "supported_io_types": { 00:15:54.732 "read": true, 00:15:54.732 "write": true, 00:15:54.732 "unmap": true, 00:15:54.732 "flush": true, 00:15:54.732 "reset": false, 00:15:54.732 "nvme_admin": false, 00:15:54.732 "nvme_io": false, 00:15:54.732 "nvme_io_md": false, 00:15:54.732 "write_zeroes": true, 00:15:54.732 "zcopy": false, 00:15:54.732 "get_zone_info": false, 00:15:54.732 "zone_management": false, 00:15:54.732 "zone_append": false, 00:15:54.732 "compare": false, 00:15:54.733 "compare_and_write": false, 00:15:54.733 "abort": false, 00:15:54.733 "seek_hole": false, 
00:15:54.733 "seek_data": false, 00:15:54.733 "copy": false, 00:15:54.733 "nvme_iov_md": false 00:15:54.733 }, 00:15:54.733 "driver_specific": { 00:15:54.733 "ftl": { 00:15:54.733 "base_bdev": "aaaa1daa-5fd8-430e-85e6-fd2a43c95027", 00:15:54.733 "cache": "nvc0n1p0" 00:15:54.733 } 00:15:54.733 } 00:15:54.733 } 00:15:54.733 ]' 00:15:54.733 10:36:29 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:15:54.733 10:36:29 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:15:54.733 10:36:29 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:15:54.994 [2024-09-28 10:36:29.612598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.994 [2024-09-28 10:36:29.612775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:54.994 [2024-09-28 10:36:29.612794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:54.994 [2024-09-28 10:36:29.612806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.994 [2024-09-28 10:36:29.612838] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:15:54.994 [2024-09-28 10:36:29.613300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.994 [2024-09-28 10:36:29.613317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:54.994 [2024-09-28 10:36:29.613328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.445 ms 00:15:54.994 [2024-09-28 10:36:29.613336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.994 [2024-09-28 10:36:29.613818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.994 [2024-09-28 10:36:29.613828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:54.994 [2024-09-28 10:36:29.613843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.448 ms 00:15:54.994 [2024-09-28 10:36:29.613852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.994 [2024-09-28 10:36:29.617507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.994 [2024-09-28 10:36:29.617529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:54.994 [2024-09-28 10:36:29.617540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.623 ms 00:15:54.995 [2024-09-28 10:36:29.617548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.995 [2024-09-28 10:36:29.624519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.995 [2024-09-28 10:36:29.624546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:15:54.995 [2024-09-28 10:36:29.624560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.930 ms 00:15:54.995 [2024-09-28 10:36:29.624569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.995 [2024-09-28 10:36:29.625854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.995 [2024-09-28 10:36:29.625888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:54.995 [2024-09-28 10:36:29.625900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.195 ms 00:15:54.995 [2024-09-28 10:36:29.625906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.995 [2024-09-28 10:36:29.629948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:15:54.995 [2024-09-28 10:36:29.629993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:54.995 [2024-09-28 10:36:29.630005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.998 ms 00:15:54.995 [2024-09-28 10:36:29.630013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.995 [2024-09-28 10:36:29.630175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.995 [2024-09-28 10:36:29.630183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:54.995 [2024-09-28 10:36:29.630195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:15:54.995 [2024-09-28 10:36:29.630203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.995 [2024-09-28 10:36:29.631945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.995 [2024-09-28 10:36:29.632072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:15:54.995 [2024-09-28 10:36:29.632094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.712 ms 00:15:54.995 [2024-09-28 10:36:29.632101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.995 [2024-09-28 10:36:29.633492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.995 [2024-09-28 10:36:29.633522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:15:54.995 [2024-09-28 10:36:29.633533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.347 ms 00:15:54.995 [2024-09-28 10:36:29.633540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.995 [2024-09-28 10:36:29.634547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.995 [2024-09-28 10:36:29.634578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:54.995 [2024-09-28 10:36:29.634588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.958 ms 00:15:54.995 [2024-09-28 10:36:29.634595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.995 [2024-09-28 10:36:29.635558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.995 [2024-09-28 10:36:29.635659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:54.995 [2024-09-28 10:36:29.635675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.864 ms 00:15:54.995 [2024-09-28 10:36:29.635682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.995 [2024-09-28 10:36:29.635723] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:54.995 [2024-09-28 10:36:29.635736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.635749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.635757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.635766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.635773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.635782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 
0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.635789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.635798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.635805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.635816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.635823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.635832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.635839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.635848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.635855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.635864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.635872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.635882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.635889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.635898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.635905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.635914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.635921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.635930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.635937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.635945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.635953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.635973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.635981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.635990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.635997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.636007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.636014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.636038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.636047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.636056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.636063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.636072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.636079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.636088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.636095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.636104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.636111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.636120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.636127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.636136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.636143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.636152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.636159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.636169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.636176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.636185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.636192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.636201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.636208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.636217] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.636224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.636233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.636240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.636249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.636256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:54.995 [2024-09-28 10:36:29.636265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:54.996 [2024-09-28 10:36:29.636273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:54.996 [2024-09-28 10:36:29.636281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:54.996 [2024-09-28 10:36:29.636288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:54.996 [2024-09-28 10:36:29.636300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:54.996 [2024-09-28 10:36:29.636308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:54.996 [2024-09-28 10:36:29.636316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:54.996 [2024-09-28 10:36:29.636323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:54.996 [2024-09-28 10:36:29.636332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:54.996 [2024-09-28 10:36:29.636339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:54.996 [2024-09-28 10:36:29.636348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:15:54.996 [2024-09-28 10:36:29.636355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:54.996 [2024-09-28 10:36:29.636363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:54.996 [2024-09-28 10:36:29.636370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:54.996 [2024-09-28 10:36:29.636379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:54.996 [2024-09-28 10:36:29.636386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:54.996 [2024-09-28 10:36:29.636395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:54.996 [2024-09-28 10:36:29.636402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:54.996 [2024-09-28 10:36:29.636410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:54.996 [2024-09-28 
10:36:29.636417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:54.996 [2024-09-28 10:36:29.636428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:54.996 [2024-09-28 10:36:29.636435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:54.996 [2024-09-28 10:36:29.636444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:54.996 [2024-09-28 10:36:29.636451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:54.996 [2024-09-28 10:36:29.636460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:54.996 [2024-09-28 10:36:29.636467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:54.996 [2024-09-28 10:36:29.636476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:54.996 [2024-09-28 10:36:29.636484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:54.996 [2024-09-28 10:36:29.636492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:54.996 [2024-09-28 10:36:29.636499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:54.996 [2024-09-28 10:36:29.636508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:54.996 [2024-09-28 10:36:29.636515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:54.996 [2024-09-28 10:36:29.636523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:54.996 [2024-09-28 10:36:29.636530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:54.996 [2024-09-28 10:36:29.636539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:54.996 [2024-09-28 10:36:29.636546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:54.996 [2024-09-28 10:36:29.636557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:54.996 [2024-09-28 10:36:29.636565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:54.996 [2024-09-28 10:36:29.636573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:54.996 [2024-09-28 10:36:29.636589] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:54.996 [2024-09-28 10:36:29.636598] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1563e554-b486-4792-9f1b-1b87095bfb08 00:15:54.996 [2024-09-28 10:36:29.636605] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:54.996 [2024-09-28 10:36:29.636614] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:54.996 [2024-09-28 10:36:29.636621] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:54.996 [2024-09-28 10:36:29.636630] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 
00:15:54.996 [2024-09-28 10:36:29.636638] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:54.996 [2024-09-28 10:36:29.636647] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:54.996 [2024-09-28 10:36:29.636654] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:54.996 [2024-09-28 10:36:29.636662] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:54.996 [2024-09-28 10:36:29.636668] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:54.996 [2024-09-28 10:36:29.636677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.996 [2024-09-28 10:36:29.636684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:54.996 [2024-09-28 10:36:29.636697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.955 ms 00:15:54.996 [2024-09-28 10:36:29.636704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.996 [2024-09-28 10:36:29.638156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.996 [2024-09-28 10:36:29.638177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:54.996 [2024-09-28 10:36:29.638189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.423 ms 00:15:54.996 [2024-09-28 10:36:29.638197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.996 [2024-09-28 10:36:29.638291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.996 [2024-09-28 10:36:29.638300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:54.996 [2024-09-28 10:36:29.638310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:15:54.996 [2024-09-28 10:36:29.638317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.996 [2024-09-28 10:36:29.643523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:54.996 [2024-09-28 10:36:29.643630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:54.996 [2024-09-28 10:36:29.643690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:54.996 [2024-09-28 10:36:29.643714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.996 [2024-09-28 10:36:29.643803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:54.996 [2024-09-28 10:36:29.643833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:54.996 [2024-09-28 10:36:29.643885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:54.996 [2024-09-28 10:36:29.643907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.996 [2024-09-28 10:36:29.643983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:54.996 [2024-09-28 10:36:29.644014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:54.996 [2024-09-28 10:36:29.644036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:54.996 [2024-09-28 10:36:29.644085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.996 [2024-09-28 10:36:29.644129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:54.996 [2024-09-28 10:36:29.644188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:54.996 [2024-09-28 10:36:29.644213] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:54.996 [2024-09-28 10:36:29.644253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.996 [2024-09-28 10:36:29.653284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:54.996 [2024-09-28 10:36:29.653431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:54.996 [2024-09-28 10:36:29.653489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:54.996 [2024-09-28 10:36:29.653511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.996 [2024-09-28 10:36:29.661037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:54.996 [2024-09-28 10:36:29.661184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:54.996 [2024-09-28 10:36:29.661253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:54.996 [2024-09-28 10:36:29.661264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.996 [2024-09-28 10:36:29.661318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:54.996 [2024-09-28 10:36:29.661327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:54.996 [2024-09-28 10:36:29.661337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:54.996 [2024-09-28 10:36:29.661344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.996 [2024-09-28 10:36:29.661393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:54.996 [2024-09-28 10:36:29.661401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:54.996 [2024-09-28 10:36:29.661410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:54.996 [2024-09-28 10:36:29.661417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.996 [2024-09-28 10:36:29.661500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:54.996 [2024-09-28 10:36:29.661509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:54.996 [2024-09-28 10:36:29.661519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:54.996 [2024-09-28 10:36:29.661526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.996 [2024-09-28 10:36:29.661577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:54.996 [2024-09-28 10:36:29.661585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:54.996 [2024-09-28 10:36:29.661607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:54.996 [2024-09-28 10:36:29.661614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.996 [2024-09-28 10:36:29.661655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:54.996 [2024-09-28 10:36:29.661663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:54.996 [2024-09-28 10:36:29.661672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:54.996 [2024-09-28 10:36:29.661679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.997 [2024-09-28 10:36:29.661733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:54.997 [2024-09-28 10:36:29.661742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base 
bdev 00:15:54.997 [2024-09-28 10:36:29.661751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:54.997 [2024-09-28 10:36:29.661758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.997 [2024-09-28 10:36:29.661924] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 49.315 ms, result 0 00:15:54.997 true 00:15:54.997 10:36:29 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 86245 00:15:54.997 10:36:29 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 86245 ']' 00:15:54.997 10:36:29 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 86245 00:15:54.997 10:36:29 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:15:54.997 10:36:29 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:54.997 10:36:29 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86245 00:15:54.997 10:36:29 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:54.997 10:36:29 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:54.997 10:36:29 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86245' 00:15:54.997 killing process with pid 86245 00:15:54.997 10:36:29 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 86245 00:15:54.997 10:36:29 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 86245 00:16:00.261 10:36:34 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:16:00.521 65536+0 records in 00:16:00.521 65536+0 records out 00:16:00.521 268435456 bytes (268 MB, 256 MiB) copied, 0.802264 s, 335 MB/s 00:16:00.521 10:36:35 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:00.783 [2024-09-28 10:36:35.344174] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:16:00.783 [2024-09-28 10:36:35.344428] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86395 ] 00:16:00.783 [2024-09-28 10:36:35.472535] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:16:00.783 [2024-09-28 10:36:35.490244] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.783 [2024-09-28 10:36:35.535141] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.046 [2024-09-28 10:36:35.626049] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:01.046 [2024-09-28 10:36:35.626116] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:01.046 [2024-09-28 10:36:35.784025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.046 [2024-09-28 10:36:35.784290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:01.046 [2024-09-28 10:36:35.784318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:01.046 [2024-09-28 10:36:35.784328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.046 [2024-09-28 10:36:35.786877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.046 [2024-09-28 10:36:35.786940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:01.046 [2024-09-28 10:36:35.786956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.520 ms 00:16:01.046 [2024-09-28 10:36:35.786990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.046 [2024-09-28 10:36:35.787106] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:01.046 [2024-09-28 10:36:35.787376] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:01.046 [2024-09-28 10:36:35.787395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.046 [2024-09-28 10:36:35.787407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:01.046 [2024-09-28 10:36:35.787430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:16:01.046 [2024-09-28 10:36:35.787439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.046 [2024-09-28 10:36:35.789253] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:01.046 [2024-09-28 10:36:35.793347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.046 [2024-09-28 10:36:35.793407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:01.046 [2024-09-28 10:36:35.793422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.095 ms 00:16:01.046 [2024-09-28 10:36:35.793430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.046 [2024-09-28 10:36:35.793519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.046 [2024-09-28 10:36:35.793535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:01.046 [2024-09-28 10:36:35.793545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:16:01.046 [2024-09-28 10:36:35.793553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.046 [2024-09-28 10:36:35.802138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.046 [2024-09-28 10:36:35.802185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:01.046 [2024-09-28 10:36:35.802201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.540 ms 00:16:01.046 [2024-09-28 10:36:35.802208] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.046 [2024-09-28 10:36:35.802340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.046 [2024-09-28 10:36:35.802352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:01.046 [2024-09-28 10:36:35.802362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:16:01.046 [2024-09-28 10:36:35.802370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.046 [2024-09-28 10:36:35.802397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.046 [2024-09-28 10:36:35.802412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:01.046 [2024-09-28 10:36:35.802421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:01.046 [2024-09-28 10:36:35.802428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.046 [2024-09-28 10:36:35.802455] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:01.046 [2024-09-28 10:36:35.804645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.046 [2024-09-28 10:36:35.804687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:01.046 [2024-09-28 10:36:35.804698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.200 ms 00:16:01.046 [2024-09-28 10:36:35.804705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.046 [2024-09-28 10:36:35.804758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.046 [2024-09-28 10:36:35.804770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:01.046 [2024-09-28 10:36:35.804779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:16:01.046 [2024-09-28 10:36:35.804786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.046 [2024-09-28 10:36:35.804812] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:01.046 [2024-09-28 10:36:35.804837] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:01.046 [2024-09-28 10:36:35.804875] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:01.046 [2024-09-28 10:36:35.804896] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:01.046 [2024-09-28 10:36:35.805027] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:01.046 [2024-09-28 10:36:35.805040] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:01.046 [2024-09-28 10:36:35.805051] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:01.046 [2024-09-28 10:36:35.805062] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:01.046 [2024-09-28 10:36:35.805071] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:01.046 [2024-09-28 10:36:35.805079] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:01.046 [2024-09-28 10:36:35.805087] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] 
L2P address size: 4 00:16:01.046 [2024-09-28 10:36:35.805095] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:01.046 [2024-09-28 10:36:35.805102] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:01.046 [2024-09-28 10:36:35.805113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.047 [2024-09-28 10:36:35.805125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:01.047 [2024-09-28 10:36:35.805133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:16:01.047 [2024-09-28 10:36:35.805141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.047 [2024-09-28 10:36:35.805231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.047 [2024-09-28 10:36:35.805240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:01.047 [2024-09-28 10:36:35.805256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:16:01.047 [2024-09-28 10:36:35.805264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.047 [2024-09-28 10:36:35.805363] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:01.047 [2024-09-28 10:36:35.805375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:01.047 [2024-09-28 10:36:35.805388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:01.047 [2024-09-28 10:36:35.805397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:01.047 [2024-09-28 10:36:35.805406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:01.047 [2024-09-28 10:36:35.805414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:01.047 [2024-09-28 10:36:35.805430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:01.047 [2024-09-28 10:36:35.805440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:01.047 [2024-09-28 10:36:35.805449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:01.047 [2024-09-28 10:36:35.805456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:01.047 [2024-09-28 10:36:35.805464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:01.047 [2024-09-28 10:36:35.805472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:01.047 [2024-09-28 10:36:35.805480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:01.047 [2024-09-28 10:36:35.805487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:01.047 [2024-09-28 10:36:35.805495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:01.047 [2024-09-28 10:36:35.805503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:01.047 [2024-09-28 10:36:35.805511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:01.047 [2024-09-28 10:36:35.805520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:01.047 [2024-09-28 10:36:35.805530] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:01.047 [2024-09-28 10:36:35.805538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:01.047 [2024-09-28 10:36:35.805547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:01.047 [2024-09-28 10:36:35.805555] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:01.047 [2024-09-28 10:36:35.805562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:01.047 [2024-09-28 10:36:35.805577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:01.047 [2024-09-28 10:36:35.805585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:01.047 [2024-09-28 10:36:35.805594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:01.047 [2024-09-28 10:36:35.805602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:01.047 [2024-09-28 10:36:35.805609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:01.047 [2024-09-28 10:36:35.805617] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:01.047 [2024-09-28 10:36:35.805625] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:01.047 [2024-09-28 10:36:35.805633] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:01.047 [2024-09-28 10:36:35.805641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:01.047 [2024-09-28 10:36:35.805649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:01.047 [2024-09-28 10:36:35.805657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:01.047 [2024-09-28 10:36:35.805664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:01.047 [2024-09-28 10:36:35.805673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:01.047 [2024-09-28 10:36:35.805681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:01.047 [2024-09-28 10:36:35.805688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:01.047 [2024-09-28 10:36:35.805696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:01.047 [2024-09-28 10:36:35.805706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:01.047 [2024-09-28 10:36:35.805713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:01.047 [2024-09-28 10:36:35.805720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:01.047 [2024-09-28 10:36:35.805726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:01.047 [2024-09-28 10:36:35.805733] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:01.047 [2024-09-28 10:36:35.805741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:01.047 [2024-09-28 10:36:35.805748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:01.047 [2024-09-28 10:36:35.805756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:01.047 [2024-09-28 10:36:35.805764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:01.047 [2024-09-28 10:36:35.805771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:01.047 [2024-09-28 10:36:35.805778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:01.047 [2024-09-28 10:36:35.805787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:01.047 [2024-09-28 10:36:35.805795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:01.047 [2024-09-28 10:36:35.805802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:16:01.047 [2024-09-28 10:36:35.805811] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:01.047 [2024-09-28 10:36:35.805824] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:01.047 [2024-09-28 10:36:35.805834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:01.047 [2024-09-28 10:36:35.805843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:01.047 [2024-09-28 10:36:35.805850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:01.047 [2024-09-28 10:36:35.805858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:01.047 [2024-09-28 10:36:35.805866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:01.047 [2024-09-28 10:36:35.805873] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:01.047 [2024-09-28 10:36:35.805881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:01.047 [2024-09-28 10:36:35.805890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:01.047 [2024-09-28 10:36:35.805897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:01.047 [2024-09-28 10:36:35.805905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:01.047 [2024-09-28 10:36:35.805912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:01.047 [2024-09-28 10:36:35.805919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:01.047 [2024-09-28 10:36:35.805927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:01.047 [2024-09-28 10:36:35.805934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:01.047 [2024-09-28 10:36:35.805941] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:01.047 [2024-09-28 10:36:35.805951] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:01.047 [2024-09-28 10:36:35.805975] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:01.047 [2024-09-28 10:36:35.805984] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:01.047 [2024-09-28 10:36:35.805991] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:01.047 [2024-09-28 10:36:35.805999] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:01.047 [2024-09-28 10:36:35.806007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.047 [2024-09-28 10:36:35.806017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:01.047 [2024-09-28 10:36:35.806028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.712 ms 00:16:01.048 [2024-09-28 10:36:35.806036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.309 [2024-09-28 10:36:35.827753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.309 [2024-09-28 10:36:35.827811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:01.309 [2024-09-28 10:36:35.827834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.660 ms 00:16:01.309 [2024-09-28 10:36:35.827843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.309 [2024-09-28 10:36:35.828019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.309 [2024-09-28 10:36:35.828032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:01.309 [2024-09-28 10:36:35.828047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:16:01.309 [2024-09-28 10:36:35.828055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.309 [2024-09-28 10:36:35.840591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.309 [2024-09-28 10:36:35.840639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:01.309 [2024-09-28 10:36:35.840656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.512 ms 00:16:01.309 [2024-09-28 10:36:35.840664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.309 [2024-09-28 10:36:35.840739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.309 [2024-09-28 10:36:35.840752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:01.309 [2024-09-28 10:36:35.840762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:01.309 [2024-09-28 10:36:35.840770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.309 [2024-09-28 10:36:35.841309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.309 [2024-09-28 10:36:35.841341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:01.309 [2024-09-28 10:36:35.841352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.518 ms 00:16:01.309 [2024-09-28 10:36:35.841361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.309 [2024-09-28 10:36:35.841523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.309 [2024-09-28 10:36:35.841541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:01.309 [2024-09-28 10:36:35.841553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:16:01.309 [2024-09-28 10:36:35.841563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.309 [2024-09-28 10:36:35.848713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.309 [2024-09-28 
10:36:35.848759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:01.309 [2024-09-28 10:36:35.848770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.123 ms 00:16:01.309 [2024-09-28 10:36:35.848783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.309 [2024-09-28 10:36:35.852164] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:16:01.309 [2024-09-28 10:36:35.852221] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:01.309 [2024-09-28 10:36:35.852233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.309 [2024-09-28 10:36:35.852242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:01.309 [2024-09-28 10:36:35.852251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.347 ms 00:16:01.309 [2024-09-28 10:36:35.852258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.309 [2024-09-28 10:36:35.868007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.309 [2024-09-28 10:36:35.868056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:01.309 [2024-09-28 10:36:35.868068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.682 ms 00:16:01.309 [2024-09-28 10:36:35.868085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.309 [2024-09-28 10:36:35.871199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.309 [2024-09-28 10:36:35.871249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:01.309 [2024-09-28 10:36:35.871259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.018 ms 00:16:01.309 [2024-09-28 10:36:35.871266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.309 [2024-09-28 10:36:35.874330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.309 [2024-09-28 10:36:35.874533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:01.309 [2024-09-28 10:36:35.874553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.003 ms 00:16:01.309 [2024-09-28 10:36:35.874561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.309 [2024-09-28 10:36:35.874925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.309 [2024-09-28 10:36:35.874939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:01.309 [2024-09-28 10:36:35.874950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:16:01.309 [2024-09-28 10:36:35.874990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.309 [2024-09-28 10:36:35.898987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.309 [2024-09-28 10:36:35.899048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:01.309 [2024-09-28 10:36:35.899061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.969 ms 00:16:01.309 [2024-09-28 10:36:35.899070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.309 [2024-09-28 10:36:35.907396] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:01.309 [2024-09-28 10:36:35.926652] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.310 [2024-09-28 10:36:35.926705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:01.310 [2024-09-28 10:36:35.926719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.485 ms 00:16:01.310 [2024-09-28 10:36:35.926727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.310 [2024-09-28 10:36:35.926830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.310 [2024-09-28 10:36:35.926840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:01.310 [2024-09-28 10:36:35.926851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:16:01.310 [2024-09-28 10:36:35.926860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.310 [2024-09-28 10:36:35.926921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.310 [2024-09-28 10:36:35.926935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:01.310 [2024-09-28 10:36:35.926947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:16:01.310 [2024-09-28 10:36:35.926992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.310 [2024-09-28 10:36:35.927021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.310 [2024-09-28 10:36:35.927030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:01.310 [2024-09-28 10:36:35.927043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:01.310 [2024-09-28 10:36:35.927052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.310 [2024-09-28 10:36:35.927086] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:01.310 [2024-09-28 10:36:35.927096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.310 [2024-09-28 10:36:35.927110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:01.310 [2024-09-28 10:36:35.927119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:16:01.310 [2024-09-28 10:36:35.927127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.310 [2024-09-28 10:36:35.933345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.310 [2024-09-28 10:36:35.933518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:01.310 [2024-09-28 10:36:35.933537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.193 ms 00:16:01.310 [2024-09-28 10:36:35.933546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.310 [2024-09-28 10:36:35.933636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.310 [2024-09-28 10:36:35.933652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:01.310 [2024-09-28 10:36:35.933664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:16:01.310 [2024-09-28 10:36:35.933672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.310 [2024-09-28 10:36:35.934668] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:01.310 [2024-09-28 10:36:35.936061] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 150.376 
ms, result 0 00:16:01.310 [2024-09-28 10:36:35.937371] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:01.310 [2024-09-28 10:36:35.944700] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:11.829  Copying: 20/256 [MB] (20 MBps) Copying: 62/256 [MB] (41 MBps) Copying: 102/256 [MB] (40 MBps) Copying: 124/256 [MB] (21 MBps) Copying: 143/256 [MB] (19 MBps) Copying: 163/256 [MB] (19 MBps) Copying: 188/256 [MB] (25 MBps) Copying: 208/256 [MB] (20 MBps) Copying: 226/256 [MB] (17 MBps) Copying: 245/256 [MB] (19 MBps) Copying: 256/256 [MB] (average 24 MBps)[2024-09-28 10:36:46.511497] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:11.830 [2024-09-28 10:36:46.512690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.830 [2024-09-28 10:36:46.512842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:11.830 [2024-09-28 10:36:46.512860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:11.830 [2024-09-28 10:36:46.512873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.830 [2024-09-28 10:36:46.512897] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:11.830 [2024-09-28 10:36:46.513340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.830 [2024-09-28 10:36:46.513364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:11.830 [2024-09-28 10:36:46.513372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.430 ms 00:16:11.830 [2024-09-28 10:36:46.513380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.830 [2024-09-28 10:36:46.514892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.830 [2024-09-28 10:36:46.514927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:11.830 [2024-09-28 10:36:46.514936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.492 ms 00:16:11.830 [2024-09-28 10:36:46.514944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.830 [2024-09-28 10:36:46.521029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.830 [2024-09-28 10:36:46.521065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:11.830 [2024-09-28 10:36:46.521074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.068 ms 00:16:11.830 [2024-09-28 10:36:46.521082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.830 [2024-09-28 10:36:46.528069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.830 [2024-09-28 10:36:46.528099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:11.830 [2024-09-28 10:36:46.528108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.941 ms 00:16:11.830 [2024-09-28 10:36:46.528115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.830 [2024-09-28 10:36:46.529370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.830 [2024-09-28 10:36:46.529402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:11.830 [2024-09-28 10:36:46.529410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
1.209 ms 00:16:11.830 [2024-09-28 10:36:46.529418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.830 [2024-09-28 10:36:46.532936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.830 [2024-09-28 10:36:46.532989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:11.830 [2024-09-28 10:36:46.532998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.486 ms 00:16:11.830 [2024-09-28 10:36:46.533010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.830 [2024-09-28 10:36:46.533131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.830 [2024-09-28 10:36:46.533146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:11.830 [2024-09-28 10:36:46.533154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:16:11.830 [2024-09-28 10:36:46.533161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.830 [2024-09-28 10:36:46.534906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.830 [2024-09-28 10:36:46.534939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:11.830 [2024-09-28 10:36:46.534948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.728 ms 00:16:11.830 [2024-09-28 10:36:46.534955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.830 [2024-09-28 10:36:46.536420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.830 [2024-09-28 10:36:46.536450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:11.830 [2024-09-28 10:36:46.536458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.425 ms 00:16:11.830 [2024-09-28 10:36:46.536465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.830 [2024-09-28 10:36:46.537570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.830 [2024-09-28 10:36:46.537602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:11.830 [2024-09-28 10:36:46.537610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.073 ms 00:16:11.830 [2024-09-28 10:36:46.537616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.830 [2024-09-28 10:36:46.538607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.830 [2024-09-28 10:36:46.538729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:11.830 [2024-09-28 10:36:46.538743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.936 ms 00:16:11.830 [2024-09-28 10:36:46.538750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.830 [2024-09-28 10:36:46.538778] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:11.830 [2024-09-28 10:36:46.538801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:11.830 [2024-09-28 10:36:46.538809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:11.830 [2024-09-28 10:36:46.538817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:11.830 [2024-09-28 10:36:46.538824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:11.830 [2024-09-28 
10:36:46.538831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:11.830 [2024-09-28 10:36:46.538838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:11.830 [2024-09-28 10:36:46.538845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:11.830 [2024-09-28 10:36:46.538852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:11.830 [2024-09-28 10:36:46.538859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:11.830 [2024-09-28 10:36:46.538866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:11.830 [2024-09-28 10:36:46.538874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:11.830 [2024-09-28 10:36:46.538881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:11.830 [2024-09-28 10:36:46.538888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:11.830 [2024-09-28 10:36:46.538895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:11.830 [2024-09-28 10:36:46.538902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:11.830 [2024-09-28 10:36:46.538909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:11.830 [2024-09-28 10:36:46.538916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:11.830 [2024-09-28 10:36:46.538923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:11.830 [2024-09-28 10:36:46.538930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:11.830 [2024-09-28 10:36:46.538937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:11.830 [2024-09-28 10:36:46.538944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:11.830 [2024-09-28 10:36:46.538951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:11.830 [2024-09-28 10:36:46.538973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:11.830 [2024-09-28 10:36:46.538981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:11.830 [2024-09-28 10:36:46.538989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:11.830 [2024-09-28 10:36:46.538996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:11.830 [2024-09-28 10:36:46.539003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:11.830 [2024-09-28 10:36:46.539011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:11.830 [2024-09-28 10:36:46.539019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:16:11.830 [2024-09-28 10:36:46.539026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:11.830 [2024-09-28 10:36:46.539033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:11.830 [2024-09-28 10:36:46.539041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:11.830 [2024-09-28 10:36:46.539049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:11.830 [2024-09-28 10:36:46.539056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:11.830 [2024-09-28 10:36:46.539063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:11.830 [2024-09-28 10:36:46.539070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:11.830 [2024-09-28 10:36:46.539078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:11.830 [2024-09-28 10:36:46.539085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:11.830 [2024-09-28 10:36:46.539092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:11.830 [2024-09-28 10:36:46.539099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:11.831 [2024-09-28 10:36:46.539563] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:11.831 [2024-09-28 10:36:46.539570] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1563e554-b486-4792-9f1b-1b87095bfb08 00:16:11.831 [2024-09-28 10:36:46.539577] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:11.831 [2024-09-28 10:36:46.539584] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:11.831 
[2024-09-28 10:36:46.539591] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:11.831 [2024-09-28 10:36:46.539598] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:11.831 [2024-09-28 10:36:46.539604] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:11.831 [2024-09-28 10:36:46.539611] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:11.831 [2024-09-28 10:36:46.539624] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:11.831 [2024-09-28 10:36:46.539630] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:11.831 [2024-09-28 10:36:46.539636] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:11.831 [2024-09-28 10:36:46.539643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.831 [2024-09-28 10:36:46.539650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:11.831 [2024-09-28 10:36:46.539659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.865 ms 00:16:11.831 [2024-09-28 10:36:46.539666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.831 [2024-09-28 10:36:46.541059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.831 [2024-09-28 10:36:46.541079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:11.831 [2024-09-28 10:36:46.541088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.376 ms 00:16:11.831 [2024-09-28 10:36:46.541095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.831 [2024-09-28 10:36:46.541189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.831 [2024-09-28 10:36:46.541200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:11.831 [2024-09-28 10:36:46.541208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:16:11.831 [2024-09-28 10:36:46.541215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.831 [2024-09-28 10:36:46.545950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:11.831 [2024-09-28 10:36:46.546075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:11.831 [2024-09-28 10:36:46.546137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:11.831 [2024-09-28 10:36:46.546159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.831 [2024-09-28 10:36:46.546224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:11.831 [2024-09-28 10:36:46.546255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:11.831 [2024-09-28 10:36:46.546274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:11.831 [2024-09-28 10:36:46.546297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.831 [2024-09-28 10:36:46.546348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:11.831 [2024-09-28 10:36:46.546420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:11.831 [2024-09-28 10:36:46.546444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:11.831 [2024-09-28 10:36:46.546462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.831 [2024-09-28 10:36:46.546491] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:16:11.832 [2024-09-28 10:36:46.546515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:11.832 [2024-09-28 10:36:46.546543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:11.832 [2024-09-28 10:36:46.546561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.832 [2024-09-28 10:36:46.555070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:11.832 [2024-09-28 10:36:46.555203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:11.832 [2024-09-28 10:36:46.555252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:11.832 [2024-09-28 10:36:46.555274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.832 [2024-09-28 10:36:46.562202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:11.832 [2024-09-28 10:36:46.562321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:11.832 [2024-09-28 10:36:46.562367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:11.832 [2024-09-28 10:36:46.562388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.832 [2024-09-28 10:36:46.562427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:11.832 [2024-09-28 10:36:46.562448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:11.832 [2024-09-28 10:36:46.562466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:11.832 [2024-09-28 10:36:46.562484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.832 [2024-09-28 10:36:46.562529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:11.832 [2024-09-28 10:36:46.562565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:11.832 [2024-09-28 10:36:46.562584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:11.832 [2024-09-28 10:36:46.562606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.832 [2024-09-28 10:36:46.562693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:11.832 [2024-09-28 10:36:46.562720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:11.832 [2024-09-28 10:36:46.562740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:11.832 [2024-09-28 10:36:46.562805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.832 [2024-09-28 10:36:46.562858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:11.832 [2024-09-28 10:36:46.562891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:11.832 [2024-09-28 10:36:46.562917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:11.832 [2024-09-28 10:36:46.562925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.832 [2024-09-28 10:36:46.562980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:11.832 [2024-09-28 10:36:46.562989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:11.832 [2024-09-28 10:36:46.562997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:11.832 [2024-09-28 10:36:46.563008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:16:11.832 [2024-09-28 10:36:46.563049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:11.832 [2024-09-28 10:36:46.563058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:11.832 [2024-09-28 10:36:46.563065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:11.832 [2024-09-28 10:36:46.563075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.832 [2024-09-28 10:36:46.563201] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 50.491 ms, result 0 00:16:12.405 00:16:12.405 00:16:12.405 10:36:47 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=86526 00:16:12.405 10:36:47 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 86526 00:16:12.405 10:36:47 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 86526 ']' 00:16:12.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:12.405 10:36:47 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:12.405 10:36:47 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:12.405 10:36:47 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:12.405 10:36:47 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:12.405 10:36:47 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:16:12.405 10:36:47 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:16:12.405 [2024-09-28 10:36:47.146368] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:16:12.405 [2024-09-28 10:36:47.147296] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86526 ] 00:16:12.667 [2024-09-28 10:36:47.287174] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:16:12.667 [2024-09-28 10:36:47.305045] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:12.667 [2024-09-28 10:36:47.354743] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:13.240 10:36:47 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:13.240 10:36:47 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:16:13.240 10:36:47 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:16:13.502 [2024-09-28 10:36:48.211647] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:13.502 [2024-09-28 10:36:48.211731] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:13.766 [2024-09-28 10:36:48.368921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.766 [2024-09-28 10:36:48.369022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:13.766 [2024-09-28 10:36:48.369043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:13.766 [2024-09-28 10:36:48.369052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.766 [2024-09-28 10:36:48.371727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.766 [2024-09-28 10:36:48.371786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:13.766 [2024-09-28 10:36:48.371799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.649 ms 00:16:13.766 [2024-09-28 10:36:48.371807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.766 [2024-09-28 10:36:48.371919] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:13.766 [2024-09-28 10:36:48.372216] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:13.766 [2024-09-28 10:36:48.372238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.766 [2024-09-28 10:36:48.372249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:13.766 [2024-09-28 10:36:48.372261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 00:16:13.766 [2024-09-28 10:36:48.372270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.766 [2024-09-28 10:36:48.374135] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:13.766 [2024-09-28 10:36:48.378056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.766 [2024-09-28 10:36:48.378114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:13.766 [2024-09-28 10:36:48.378126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.930 ms 00:16:13.766 [2024-09-28 10:36:48.378143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.766 [2024-09-28 10:36:48.378228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.766 [2024-09-28 10:36:48.378244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:13.766 [2024-09-28 10:36:48.378253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:16:13.766 [2024-09-28 10:36:48.378263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.766 [2024-09-28 10:36:48.386858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.766 [2024-09-28 
10:36:48.386907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:13.766 [2024-09-28 10:36:48.386919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.538 ms 00:16:13.766 [2024-09-28 10:36:48.386929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.766 [2024-09-28 10:36:48.387112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.766 [2024-09-28 10:36:48.387128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:13.766 [2024-09-28 10:36:48.387138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:16:13.766 [2024-09-28 10:36:48.387155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.766 [2024-09-28 10:36:48.387184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.766 [2024-09-28 10:36:48.387194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:13.766 [2024-09-28 10:36:48.387202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:13.766 [2024-09-28 10:36:48.387212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.766 [2024-09-28 10:36:48.387236] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:13.766 [2024-09-28 10:36:48.389400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.766 [2024-09-28 10:36:48.389599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:13.766 [2024-09-28 10:36:48.389627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.163 ms 00:16:13.766 [2024-09-28 10:36:48.389635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.766 [2024-09-28 10:36:48.389688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.766 [2024-09-28 10:36:48.389696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:13.766 [2024-09-28 10:36:48.389707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:16:13.766 [2024-09-28 10:36:48.389714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.766 [2024-09-28 10:36:48.389737] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:13.766 [2024-09-28 10:36:48.389757] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:13.766 [2024-09-28 10:36:48.389806] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:13.766 [2024-09-28 10:36:48.389824] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:13.766 [2024-09-28 10:36:48.389931] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:13.766 [2024-09-28 10:36:48.389941] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:13.766 [2024-09-28 10:36:48.389954] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:13.766 [2024-09-28 10:36:48.389995] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:13.766 [2024-09-28 10:36:48.390009] ftl_layout.c: 687:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:13.766 [2024-09-28 10:36:48.390017] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:13.766 [2024-09-28 10:36:48.390027] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:13.766 [2024-09-28 10:36:48.390035] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:13.766 [2024-09-28 10:36:48.390047] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:13.766 [2024-09-28 10:36:48.390055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.766 [2024-09-28 10:36:48.390064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:13.766 [2024-09-28 10:36:48.390072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.323 ms 00:16:13.766 [2024-09-28 10:36:48.390081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.766 [2024-09-28 10:36:48.390170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.766 [2024-09-28 10:36:48.390182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:13.766 [2024-09-28 10:36:48.390191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:16:13.766 [2024-09-28 10:36:48.390201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.766 [2024-09-28 10:36:48.390306] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:13.766 [2024-09-28 10:36:48.390320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:13.766 [2024-09-28 10:36:48.390329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:13.766 [2024-09-28 10:36:48.390341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:13.766 [2024-09-28 10:36:48.390351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:13.766 [2024-09-28 10:36:48.390361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:13.766 [2024-09-28 10:36:48.390369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:13.766 [2024-09-28 10:36:48.390380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:13.766 [2024-09-28 10:36:48.390388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:13.766 [2024-09-28 10:36:48.390397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:13.766 [2024-09-28 10:36:48.390406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:13.766 [2024-09-28 10:36:48.390416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:13.766 [2024-09-28 10:36:48.390424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:13.766 [2024-09-28 10:36:48.390440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:13.766 [2024-09-28 10:36:48.390451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:13.766 [2024-09-28 10:36:48.390461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:13.767 [2024-09-28 10:36:48.390469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:13.767 [2024-09-28 10:36:48.390478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:13.767 [2024-09-28 10:36:48.390485] ftl_layout.c: 133:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:16:13.767 [2024-09-28 10:36:48.390497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:13.767 [2024-09-28 10:36:48.390503] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:13.767 [2024-09-28 10:36:48.390512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:13.767 [2024-09-28 10:36:48.390519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:13.767 [2024-09-28 10:36:48.390527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:13.767 [2024-09-28 10:36:48.390533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:13.767 [2024-09-28 10:36:48.390541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:13.767 [2024-09-28 10:36:48.390548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:13.767 [2024-09-28 10:36:48.390557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:13.767 [2024-09-28 10:36:48.390564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:13.767 [2024-09-28 10:36:48.390572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:13.767 [2024-09-28 10:36:48.390578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:13.767 [2024-09-28 10:36:48.390586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:13.767 [2024-09-28 10:36:48.390593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:13.767 [2024-09-28 10:36:48.390602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:13.767 [2024-09-28 10:36:48.390609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:13.767 [2024-09-28 10:36:48.390619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:13.767 [2024-09-28 10:36:48.390625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:13.767 [2024-09-28 10:36:48.390634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:13.767 [2024-09-28 10:36:48.390641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:13.767 [2024-09-28 10:36:48.390648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:13.767 [2024-09-28 10:36:48.390655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:13.767 [2024-09-28 10:36:48.390663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:13.767 [2024-09-28 10:36:48.390670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:13.767 [2024-09-28 10:36:48.390678] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:13.767 [2024-09-28 10:36:48.390686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:13.767 [2024-09-28 10:36:48.390698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:13.767 [2024-09-28 10:36:48.390707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:13.767 [2024-09-28 10:36:48.390717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:13.767 [2024-09-28 10:36:48.390724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:13.767 [2024-09-28 10:36:48.390733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:13.767 [2024-09-28 10:36:48.390739] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:13.767 [2024-09-28 10:36:48.390749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:13.767 [2024-09-28 10:36:48.390757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:13.767 [2024-09-28 10:36:48.390766] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:13.767 [2024-09-28 10:36:48.390776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:13.767 [2024-09-28 10:36:48.390786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:13.767 [2024-09-28 10:36:48.390793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:13.767 [2024-09-28 10:36:48.390804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:13.767 [2024-09-28 10:36:48.390811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:13.767 [2024-09-28 10:36:48.390820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:13.767 [2024-09-28 10:36:48.390828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:13.767 [2024-09-28 10:36:48.390838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:13.767 [2024-09-28 10:36:48.390846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:13.767 [2024-09-28 10:36:48.390855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:13.767 [2024-09-28 10:36:48.390862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:13.767 [2024-09-28 10:36:48.390871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:13.767 [2024-09-28 10:36:48.390879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:13.767 [2024-09-28 10:36:48.390890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:13.767 [2024-09-28 10:36:48.390897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:13.767 [2024-09-28 10:36:48.390906] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:13.767 [2024-09-28 10:36:48.390917] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:13.767 [2024-09-28 10:36:48.390927] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 
blk_sz:0x20 00:16:13.767 [2024-09-28 10:36:48.390934] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:13.767 [2024-09-28 10:36:48.390943] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:13.767 [2024-09-28 10:36:48.390951] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:13.767 [2024-09-28 10:36:48.390974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.767 [2024-09-28 10:36:48.390982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:13.767 [2024-09-28 10:36:48.390996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.721 ms 00:16:13.767 [2024-09-28 10:36:48.391005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.767 [2024-09-28 10:36:48.405424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.767 [2024-09-28 10:36:48.405473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:13.767 [2024-09-28 10:36:48.405488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.332 ms 00:16:13.767 [2024-09-28 10:36:48.405498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.767 [2024-09-28 10:36:48.405636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.767 [2024-09-28 10:36:48.405649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:13.767 [2024-09-28 10:36:48.405660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:16:13.767 [2024-09-28 10:36:48.405669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.767 [2024-09-28 10:36:48.417385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.767 [2024-09-28 10:36:48.417430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:13.767 [2024-09-28 10:36:48.417446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.689 ms 00:16:13.767 [2024-09-28 10:36:48.417454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.767 [2024-09-28 10:36:48.417523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.767 [2024-09-28 10:36:48.417532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:13.767 [2024-09-28 10:36:48.417543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:13.767 [2024-09-28 10:36:48.417551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.767 [2024-09-28 10:36:48.418049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.767 [2024-09-28 10:36:48.418070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:13.768 [2024-09-28 10:36:48.418082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.467 ms 00:16:13.768 [2024-09-28 10:36:48.418101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.768 [2024-09-28 10:36:48.418250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.768 [2024-09-28 10:36:48.418262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:13.768 [2024-09-28 10:36:48.418275] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:16:13.768 [2024-09-28 10:36:48.418285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.768 [2024-09-28 10:36:48.440210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.768 [2024-09-28 10:36:48.440423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:13.768 [2024-09-28 10:36:48.440451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.893 ms 00:16:13.768 [2024-09-28 10:36:48.440461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.768 [2024-09-28 10:36:48.444428] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:13.768 [2024-09-28 10:36:48.444483] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:13.768 [2024-09-28 10:36:48.444500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.768 [2024-09-28 10:36:48.444508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:13.768 [2024-09-28 10:36:48.444520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.876 ms 00:16:13.768 [2024-09-28 10:36:48.444528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.768 [2024-09-28 10:36:48.460578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.768 [2024-09-28 10:36:48.460628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:13.768 [2024-09-28 10:36:48.460650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.958 ms 00:16:13.768 [2024-09-28 10:36:48.460658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.768 [2024-09-28 10:36:48.463791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.768 [2024-09-28 10:36:48.464009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:13.768 [2024-09-28 10:36:48.464035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.027 ms 00:16:13.768 [2024-09-28 10:36:48.464044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.768 [2024-09-28 10:36:48.467444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.768 [2024-09-28 10:36:48.467651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:13.768 [2024-09-28 10:36:48.467679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.047 ms 00:16:13.768 [2024-09-28 10:36:48.467687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.768 [2024-09-28 10:36:48.468077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.768 [2024-09-28 10:36:48.468095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:13.768 [2024-09-28 10:36:48.468108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:16:13.768 [2024-09-28 10:36:48.468117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.768 [2024-09-28 10:36:48.492189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.768 [2024-09-28 10:36:48.492255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:13.768 [2024-09-28 10:36:48.492274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.042 ms 
00:16:13.768 [2024-09-28 10:36:48.492282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.768 [2024-09-28 10:36:48.500366] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:13.768 [2024-09-28 10:36:48.519431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.768 [2024-09-28 10:36:48.519489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:13.768 [2024-09-28 10:36:48.519504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.045 ms 00:16:13.768 [2024-09-28 10:36:48.519515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.768 [2024-09-28 10:36:48.519606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.768 [2024-09-28 10:36:48.519628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:13.768 [2024-09-28 10:36:48.519642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:13.768 [2024-09-28 10:36:48.519652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.768 [2024-09-28 10:36:48.519712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.768 [2024-09-28 10:36:48.519723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:13.768 [2024-09-28 10:36:48.519731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:16:13.768 [2024-09-28 10:36:48.519741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.768 [2024-09-28 10:36:48.519766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.768 [2024-09-28 10:36:48.519783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:13.768 [2024-09-28 10:36:48.519791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:13.768 [2024-09-28 10:36:48.519806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.768 [2024-09-28 10:36:48.519843] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:13.768 [2024-09-28 10:36:48.519855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.768 [2024-09-28 10:36:48.519863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:13.768 [2024-09-28 10:36:48.519872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:13.768 [2024-09-28 10:36:48.519880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.768 [2024-09-28 10:36:48.526328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.768 [2024-09-28 10:36:48.526514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:13.768 [2024-09-28 10:36:48.526538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.417 ms 00:16:13.768 [2024-09-28 10:36:48.526549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.768 [2024-09-28 10:36:48.526637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.768 [2024-09-28 10:36:48.526647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:13.768 [2024-09-28 10:36:48.526658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:16:13.768 [2024-09-28 10:36:48.526666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.768 [2024-09-28 
10:36:48.527747] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:13.768 [2024-09-28 10:36:48.529202] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 158.493 ms, result 0 00:16:13.768 [2024-09-28 10:36:48.531716] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:13.768 Some configs were skipped because the RPC state that can call them passed over. 00:16:14.030 10:36:48 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:16:14.030 [2024-09-28 10:36:48.769233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.030 [2024-09-28 10:36:48.769314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:16:14.030 [2024-09-28 10:36:48.769331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.406 ms 00:16:14.030 [2024-09-28 10:36:48.769343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.030 [2024-09-28 10:36:48.769383] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.564 ms, result 0 00:16:14.030 true 00:16:14.030 10:36:48 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:16:14.291 [2024-09-28 10:36:48.976998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.291 [2024-09-28 10:36:48.977066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:16:14.291 [2024-09-28 10:36:48.977085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.938 ms 00:16:14.291 [2024-09-28 10:36:48.977094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.291 [2024-09-28 10:36:48.977136] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.095 ms, result 0 00:16:14.291 true 00:16:14.291 10:36:48 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 86526 00:16:14.291 10:36:48 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 86526 ']' 00:16:14.291 10:36:49 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 86526 00:16:14.291 10:36:49 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:16:14.291 10:36:49 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:14.291 10:36:49 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86526 00:16:14.291 10:36:49 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:14.291 10:36:49 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:14.291 10:36:49 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86526' 00:16:14.291 killing process with pid 86526 00:16:14.291 10:36:49 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 86526 00:16:14.291 10:36:49 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 86526 00:16:14.553 [2024-09-28 10:36:49.153374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.553 [2024-09-28 10:36:49.153592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:14.553 [2024-09-28 10:36:49.153655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:14.553 [2024-09-28 
10:36:49.153683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.553 [2024-09-28 10:36:49.153726] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:14.553 [2024-09-28 10:36:49.154258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.553 [2024-09-28 10:36:49.154305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:14.553 [2024-09-28 10:36:49.154331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.491 ms 00:16:14.553 [2024-09-28 10:36:49.154350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.553 [2024-09-28 10:36:49.154711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.553 [2024-09-28 10:36:49.154777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:14.553 [2024-09-28 10:36:49.154791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:16:14.553 [2024-09-28 10:36:49.154799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.553 [2024-09-28 10:36:49.158849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.553 [2024-09-28 10:36:49.158887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:14.553 [2024-09-28 10:36:49.158898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.027 ms 00:16:14.553 [2024-09-28 10:36:49.158907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.553 [2024-09-28 10:36:49.166037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.553 [2024-09-28 10:36:49.166071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:14.553 [2024-09-28 10:36:49.166085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.090 ms 00:16:14.553 [2024-09-28 10:36:49.166094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.553 [2024-09-28 10:36:49.168591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.553 [2024-09-28 10:36:49.168630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:14.553 [2024-09-28 10:36:49.168642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.421 ms 00:16:14.553 [2024-09-28 10:36:49.168649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.553 [2024-09-28 10:36:49.172410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.553 [2024-09-28 10:36:49.172556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:14.553 [2024-09-28 10:36:49.172579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.714 ms 00:16:14.553 [2024-09-28 10:36:49.172594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.553 [2024-09-28 10:36:49.172727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.553 [2024-09-28 10:36:49.172737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:14.553 [2024-09-28 10:36:49.172751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:16:14.553 [2024-09-28 10:36:49.172761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.553 [2024-09-28 10:36:49.175542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.553 [2024-09-28 10:36:49.175587] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:14.553 [2024-09-28 10:36:49.175610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.755 ms 00:16:14.553 [2024-09-28 10:36:49.175621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.553 [2024-09-28 10:36:49.177823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.553 [2024-09-28 10:36:49.177933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:14.553 [2024-09-28 10:36:49.178003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.149 ms 00:16:14.553 [2024-09-28 10:36:49.178027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.553 [2024-09-28 10:36:49.179937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.553 [2024-09-28 10:36:49.180130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:14.553 [2024-09-28 10:36:49.180191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.813 ms 00:16:14.553 [2024-09-28 10:36:49.180213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.553 [2024-09-28 10:36:49.182056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.553 [2024-09-28 10:36:49.182170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:14.553 [2024-09-28 10:36:49.182189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.762 ms 00:16:14.553 [2024-09-28 10:36:49.182197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.553 [2024-09-28 10:36:49.182232] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:14.553 [2024-09-28 10:36:49.182248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182355] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182566] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 
10:36:49.182780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:14.554 [2024-09-28 10:36:49.182993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:16:14.555 [2024-09-28 10:36:49.183002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:14.555 [2024-09-28 10:36:49.183009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:14.555 [2024-09-28 10:36:49.183018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:14.555 [2024-09-28 10:36:49.183025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:14.555 [2024-09-28 10:36:49.183035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:14.555 [2024-09-28 10:36:49.183048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:14.555 [2024-09-28 10:36:49.183058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:14.555 [2024-09-28 10:36:49.183065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:14.555 [2024-09-28 10:36:49.183075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:14.555 [2024-09-28 10:36:49.183082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:14.555 [2024-09-28 10:36:49.183092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:14.555 [2024-09-28 10:36:49.183100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:14.555 [2024-09-28 10:36:49.183109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:14.555 [2024-09-28 10:36:49.183125] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:14.555 [2024-09-28 10:36:49.183134] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1563e554-b486-4792-9f1b-1b87095bfb08 00:16:14.555 [2024-09-28 10:36:49.183142] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:14.555 [2024-09-28 10:36:49.183153] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:14.555 [2024-09-28 10:36:49.183160] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:14.555 [2024-09-28 10:36:49.183169] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:14.555 [2024-09-28 10:36:49.183178] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:14.555 [2024-09-28 10:36:49.183187] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:14.555 [2024-09-28 10:36:49.183194] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:14.555 [2024-09-28 10:36:49.183202] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:14.555 [2024-09-28 10:36:49.183208] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:14.555 [2024-09-28 10:36:49.183217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.555 [2024-09-28 10:36:49.183224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:14.555 [2024-09-28 10:36:49.183237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.987 ms 00:16:14.555 [2024-09-28 10:36:49.183244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:16:14.555 [2024-09-28 10:36:49.185137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.555 [2024-09-28 10:36:49.185238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:14.555 [2024-09-28 10:36:49.185290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.857 ms 00:16:14.555 [2024-09-28 10:36:49.185314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.555 [2024-09-28 10:36:49.185419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.555 [2024-09-28 10:36:49.185445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:14.555 [2024-09-28 10:36:49.185513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:16:14.555 [2024-09-28 10:36:49.185542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.555 [2024-09-28 10:36:49.191406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.555 [2024-09-28 10:36:49.191553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:14.555 [2024-09-28 10:36:49.191621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.555 [2024-09-28 10:36:49.191644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.555 [2024-09-28 10:36:49.191771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.555 [2024-09-28 10:36:49.191798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:14.555 [2024-09-28 10:36:49.191822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.555 [2024-09-28 10:36:49.191876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.555 [2024-09-28 10:36:49.191942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.555 [2024-09-28 10:36:49.191999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:14.555 [2024-09-28 10:36:49.192126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.555 [2024-09-28 10:36:49.192159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.555 [2024-09-28 10:36:49.192199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.555 [2024-09-28 10:36:49.192219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:14.555 [2024-09-28 10:36:49.192240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.555 [2024-09-28 10:36:49.192259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.555 [2024-09-28 10:36:49.202569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.555 [2024-09-28 10:36:49.202728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:14.555 [2024-09-28 10:36:49.202859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.555 [2024-09-28 10:36:49.202889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.555 [2024-09-28 10:36:49.210831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.555 [2024-09-28 10:36:49.210985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:14.555 [2024-09-28 10:36:49.211055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.555 
[2024-09-28 10:36:49.211077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.555 [2024-09-28 10:36:49.211141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.555 [2024-09-28 10:36:49.211166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:14.555 [2024-09-28 10:36:49.211187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.555 [2024-09-28 10:36:49.211206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.555 [2024-09-28 10:36:49.211249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.555 [2024-09-28 10:36:49.211269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:14.555 [2024-09-28 10:36:49.211291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.555 [2024-09-28 10:36:49.211349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.555 [2024-09-28 10:36:49.211481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.555 [2024-09-28 10:36:49.211520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:14.555 [2024-09-28 10:36:49.211557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.555 [2024-09-28 10:36:49.211583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.555 [2024-09-28 10:36:49.211634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.555 [2024-09-28 10:36:49.211929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:14.555 [2024-09-28 10:36:49.211946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.555 [2024-09-28 10:36:49.211954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.555 [2024-09-28 10:36:49.212019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.555 [2024-09-28 10:36:49.212028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:14.555 [2024-09-28 10:36:49.212039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.555 [2024-09-28 10:36:49.212047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.555 [2024-09-28 10:36:49.212096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.555 [2024-09-28 10:36:49.212106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:14.555 [2024-09-28 10:36:49.212116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.555 [2024-09-28 10:36:49.212123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.555 [2024-09-28 10:36:49.212258] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 58.855 ms, result 0 00:16:14.817 10:36:49 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:16:14.817 10:36:49 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:14.817 [2024-09-28 10:36:49.481091] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
00:16:14.817 [2024-09-28 10:36:49.481216] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86562 ] 00:16:15.078 [2024-09-28 10:36:49.612483] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:15.079 [2024-09-28 10:36:49.632503] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.079 [2024-09-28 10:36:49.685301] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.079 [2024-09-28 10:36:49.801727] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:15.079 [2024-09-28 10:36:49.801817] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:15.342 [2024-09-28 10:36:49.962303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.342 [2024-09-28 10:36:49.962362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:15.342 [2024-09-28 10:36:49.962377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:15.342 [2024-09-28 10:36:49.962386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.342 [2024-09-28 10:36:49.964918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.342 [2024-09-28 10:36:49.965116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:15.342 [2024-09-28 10:36:49.965138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.510 ms 00:16:15.342 [2024-09-28 10:36:49.965146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.342 [2024-09-28 10:36:49.965253] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:15.342 [2024-09-28 10:36:49.965520] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:15.342 [2024-09-28 10:36:49.965539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.342 [2024-09-28 10:36:49.965549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:15.342 [2024-09-28 10:36:49.965559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:16:15.342 [2024-09-28 10:36:49.965567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.342 [2024-09-28 10:36:49.967463] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:15.342 [2024-09-28 10:36:49.971233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.342 [2024-09-28 10:36:49.971291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:15.342 [2024-09-28 10:36:49.971306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.773 ms 00:16:15.342 [2024-09-28 10:36:49.971315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.342 [2024-09-28 10:36:49.971423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.342 [2024-09-28 10:36:49.971434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:15.342 [2024-09-28 10:36:49.971444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:16:15.342 [2024-09-28 
10:36:49.971451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.342 [2024-09-28 10:36:49.979651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.342 [2024-09-28 10:36:49.979693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:15.342 [2024-09-28 10:36:49.979705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.149 ms 00:16:15.342 [2024-09-28 10:36:49.979712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.342 [2024-09-28 10:36:49.979842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.342 [2024-09-28 10:36:49.979856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:15.342 [2024-09-28 10:36:49.979869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:16:15.342 [2024-09-28 10:36:49.979877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.342 [2024-09-28 10:36:49.979905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.342 [2024-09-28 10:36:49.979916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:15.342 [2024-09-28 10:36:49.979925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:15.342 [2024-09-28 10:36:49.979933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.342 [2024-09-28 10:36:49.979955] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:15.342 [2024-09-28 10:36:49.982041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.342 [2024-09-28 10:36:49.982078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:15.342 [2024-09-28 10:36:49.982095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.092 ms 00:16:15.342 [2024-09-28 10:36:49.982102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.342 [2024-09-28 10:36:49.982149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.342 [2024-09-28 10:36:49.982160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:15.342 [2024-09-28 10:36:49.982169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:16:15.342 [2024-09-28 10:36:49.982177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.342 [2024-09-28 10:36:49.982195] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:15.342 [2024-09-28 10:36:49.982220] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:15.342 [2024-09-28 10:36:49.982259] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:15.342 [2024-09-28 10:36:49.982280] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:15.342 [2024-09-28 10:36:49.982387] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:15.342 [2024-09-28 10:36:49.982398] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:15.342 [2024-09-28 10:36:49.982409] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:16:15.343 [2024-09-28 10:36:49.982420] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:15.343 [2024-09-28 10:36:49.982429] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:15.343 [2024-09-28 10:36:49.982437] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:15.343 [2024-09-28 10:36:49.982445] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:15.343 [2024-09-28 10:36:49.982453] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:15.343 [2024-09-28 10:36:49.982460] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:15.343 [2024-09-28 10:36:49.982471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.343 [2024-09-28 10:36:49.982481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:15.343 [2024-09-28 10:36:49.982490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:16:15.343 [2024-09-28 10:36:49.982497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.343 [2024-09-28 10:36:49.982589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.343 [2024-09-28 10:36:49.982598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:15.343 [2024-09-28 10:36:49.982606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:16:15.343 [2024-09-28 10:36:49.982614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.343 [2024-09-28 10:36:49.982717] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:15.343 [2024-09-28 10:36:49.982729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:15.343 [2024-09-28 10:36:49.982741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:15.343 [2024-09-28 10:36:49.982750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:15.343 [2024-09-28 10:36:49.982759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:15.343 [2024-09-28 10:36:49.982767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:15.343 [2024-09-28 10:36:49.982782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:15.343 [2024-09-28 10:36:49.982794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:15.343 [2024-09-28 10:36:49.982802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:15.343 [2024-09-28 10:36:49.982810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:15.343 [2024-09-28 10:36:49.982817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:15.343 [2024-09-28 10:36:49.982825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:15.343 [2024-09-28 10:36:49.982833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:15.343 [2024-09-28 10:36:49.982841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:15.343 [2024-09-28 10:36:49.982848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:15.343 [2024-09-28 10:36:49.982856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:15.343 [2024-09-28 10:36:49.982863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:16:15.343 [2024-09-28 10:36:49.982871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:15.343 [2024-09-28 10:36:49.982879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:15.343 [2024-09-28 10:36:49.982887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:15.343 [2024-09-28 10:36:49.982897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:15.343 [2024-09-28 10:36:49.982905] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:15.343 [2024-09-28 10:36:49.982914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:15.343 [2024-09-28 10:36:49.982929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:15.343 [2024-09-28 10:36:49.982938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:15.343 [2024-09-28 10:36:49.982947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:15.343 [2024-09-28 10:36:49.982954] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:15.343 [2024-09-28 10:36:49.982986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:15.343 [2024-09-28 10:36:49.982995] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:15.343 [2024-09-28 10:36:49.983003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:15.343 [2024-09-28 10:36:49.983011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:15.343 [2024-09-28 10:36:49.983019] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:15.343 [2024-09-28 10:36:49.983027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:15.343 [2024-09-28 10:36:49.983035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:15.343 [2024-09-28 10:36:49.983042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:15.343 [2024-09-28 10:36:49.983050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:15.343 [2024-09-28 10:36:49.983058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:15.343 [2024-09-28 10:36:49.983065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:15.343 [2024-09-28 10:36:49.983073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:15.343 [2024-09-28 10:36:49.983084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:15.343 [2024-09-28 10:36:49.983092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:15.343 [2024-09-28 10:36:49.983099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:15.343 [2024-09-28 10:36:49.983107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:15.343 [2024-09-28 10:36:49.983114] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:15.343 [2024-09-28 10:36:49.983123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:15.343 [2024-09-28 10:36:49.983131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:15.343 [2024-09-28 10:36:49.983138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:15.343 [2024-09-28 10:36:49.983146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:15.343 [2024-09-28 10:36:49.983153] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:15.343 [2024-09-28 10:36:49.983160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:15.343 [2024-09-28 10:36:49.983167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:15.343 [2024-09-28 10:36:49.983173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:15.343 [2024-09-28 10:36:49.983181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:15.343 [2024-09-28 10:36:49.983191] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:15.343 [2024-09-28 10:36:49.983200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:15.343 [2024-09-28 10:36:49.983211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:15.343 [2024-09-28 10:36:49.983218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:15.343 [2024-09-28 10:36:49.983225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:15.343 [2024-09-28 10:36:49.983233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:15.343 [2024-09-28 10:36:49.983240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:15.343 [2024-09-28 10:36:49.983247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:15.343 [2024-09-28 10:36:49.983254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:15.343 [2024-09-28 10:36:49.983261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:15.343 [2024-09-28 10:36:49.983268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:15.343 [2024-09-28 10:36:49.983275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:15.343 [2024-09-28 10:36:49.983282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:15.343 [2024-09-28 10:36:49.983290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:15.343 [2024-09-28 10:36:49.983297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:15.343 [2024-09-28 10:36:49.983304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:15.343 [2024-09-28 10:36:49.983311] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:15.343 [2024-09-28 10:36:49.983319] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:15.343 [2024-09-28 10:36:49.983333] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:15.343 [2024-09-28 10:36:49.983341] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:15.343 [2024-09-28 10:36:49.983348] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:15.343 [2024-09-28 10:36:49.983355] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:15.343 [2024-09-28 10:36:49.983363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.343 [2024-09-28 10:36:49.983373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:15.343 [2024-09-28 10:36:49.983380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.714 ms 00:16:15.343 [2024-09-28 10:36:49.983413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.343 [2024-09-28 10:36:50.006876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.343 [2024-09-28 10:36:50.007104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:15.343 [2024-09-28 10:36:50.007189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.404 ms 00:16:15.343 [2024-09-28 10:36:50.007221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.343 [2024-09-28 10:36:50.007433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.343 [2024-09-28 10:36:50.007554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:15.343 [2024-09-28 10:36:50.007594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:16:15.344 [2024-09-28 10:36:50.007614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.344 [2024-09-28 10:36:50.020343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.344 [2024-09-28 10:36:50.020502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:15.344 [2024-09-28 10:36:50.020555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.625 ms 00:16:15.344 [2024-09-28 10:36:50.020577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.344 [2024-09-28 10:36:50.020674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.344 [2024-09-28 10:36:50.020704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:15.344 [2024-09-28 10:36:50.020725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:15.344 [2024-09-28 10:36:50.020744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.344 [2024-09-28 10:36:50.021299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.344 [2024-09-28 10:36:50.021366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:15.344 [2024-09-28 10:36:50.021388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.515 ms 00:16:15.344 [2024-09-28 10:36:50.021517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.344 [2024-09-28 10:36:50.021690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:16:15.344 [2024-09-28 10:36:50.021774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:15.344 [2024-09-28 10:36:50.021802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:16:15.344 [2024-09-28 10:36:50.021825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.344 [2024-09-28 10:36:50.029240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.344 [2024-09-28 10:36:50.029389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:15.344 [2024-09-28 10:36:50.029442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.377 ms 00:16:15.344 [2024-09-28 10:36:50.029473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.344 [2024-09-28 10:36:50.033340] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:15.344 [2024-09-28 10:36:50.033509] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:15.344 [2024-09-28 10:36:50.033572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.344 [2024-09-28 10:36:50.033594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:15.344 [2024-09-28 10:36:50.033614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.975 ms 00:16:15.344 [2024-09-28 10:36:50.033632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.344 [2024-09-28 10:36:50.049506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.344 [2024-09-28 10:36:50.049672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:15.344 [2024-09-28 10:36:50.049735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.790 ms 00:16:15.344 [2024-09-28 10:36:50.049759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.344 [2024-09-28 10:36:50.052680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.344 [2024-09-28 10:36:50.052830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:15.344 [2024-09-28 10:36:50.052884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.816 ms 00:16:15.344 [2024-09-28 10:36:50.052905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.344 [2024-09-28 10:36:50.055781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.344 [2024-09-28 10:36:50.055956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:15.344 [2024-09-28 10:36:50.055989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.725 ms 00:16:15.344 [2024-09-28 10:36:50.055997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.344 [2024-09-28 10:36:50.056343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.344 [2024-09-28 10:36:50.056360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:15.344 [2024-09-28 10:36:50.056370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:16:15.344 [2024-09-28 10:36:50.056378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.344 [2024-09-28 10:36:50.079797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.344 [2024-09-28 10:36:50.080060] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:15.344 [2024-09-28 10:36:50.080091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.388 ms 00:16:15.344 [2024-09-28 10:36:50.080101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.344 [2024-09-28 10:36:50.088701] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:15.344 [2024-09-28 10:36:50.108254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.344 [2024-09-28 10:36:50.108311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:15.344 [2024-09-28 10:36:50.108327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.753 ms 00:16:15.344 [2024-09-28 10:36:50.108337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.344 [2024-09-28 10:36:50.108444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.344 [2024-09-28 10:36:50.108457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:15.344 [2024-09-28 10:36:50.108468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:16:15.344 [2024-09-28 10:36:50.108479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.344 [2024-09-28 10:36:50.108549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.344 [2024-09-28 10:36:50.108559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:15.344 [2024-09-28 10:36:50.108568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:16:15.344 [2024-09-28 10:36:50.108576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.344 [2024-09-28 10:36:50.108602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.344 [2024-09-28 10:36:50.108612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:15.344 [2024-09-28 10:36:50.108620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:15.344 [2024-09-28 10:36:50.108628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.344 [2024-09-28 10:36:50.108672] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:15.344 [2024-09-28 10:36:50.108688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.344 [2024-09-28 10:36:50.108702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:15.344 [2024-09-28 10:36:50.108711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:16:15.344 [2024-09-28 10:36:50.108718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.344 [2024-09-28 10:36:50.114930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.344 [2024-09-28 10:36:50.115004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:15.344 [2024-09-28 10:36:50.115018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.190 ms 00:16:15.344 [2024-09-28 10:36:50.115027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.344 [2024-09-28 10:36:50.115144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.344 [2024-09-28 10:36:50.115156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:15.344 [2024-09-28 10:36:50.115165] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:16:15.344 [2024-09-28 10:36:50.115174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.605 [2024-09-28 10:36:50.117601] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:15.605 [2024-09-28 10:36:50.120485] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 154.538 ms, result 0 00:16:15.605 [2024-09-28 10:36:50.122250] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:15.605 [2024-09-28 10:36:50.129647] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:30.362  Copying: 14/256 [MB] (14 MBps) Copying: 25/256 [MB] (10 MBps) Copying: 54/256 [MB] (29 MBps) Copying: 76/256 [MB] (22 MBps) Copying: 89/256 [MB] (12 MBps) Copying: 106/256 [MB] (16 MBps) Copying: 122/256 [MB] (16 MBps) Copying: 136/256 [MB] (14 MBps) Copying: 154/256 [MB] (18 MBps) Copying: 171/256 [MB] (16 MBps) Copying: 191/256 [MB] (20 MBps) Copying: 207/256 [MB] (15 MBps) Copying: 224/256 [MB] (17 MBps) Copying: 241/256 [MB] (16 MBps) Copying: 256/256 [MB] (average 17 MBps)[2024-09-28 10:37:04.815097] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:30.362 [2024-09-28 10:37:04.816834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.362 [2024-09-28 10:37:04.817004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:30.362 [2024-09-28 10:37:04.817090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:30.362 [2024-09-28 10:37:04.817155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.362 [2024-09-28 10:37:04.817204] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:30.362 [2024-09-28 10:37:04.817777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.362 [2024-09-28 10:37:04.817898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:30.362 [2024-09-28 10:37:04.817973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.499 ms 00:16:30.362 [2024-09-28 10:37:04.818034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.362 [2024-09-28 10:37:04.818314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.362 [2024-09-28 10:37:04.818349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:30.362 [2024-09-28 10:37:04.818472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.234 ms 00:16:30.362 [2024-09-28 10:37:04.818496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.362 [2024-09-28 10:37:04.822223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.362 [2024-09-28 10:37:04.822314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:30.362 [2024-09-28 10:37:04.822379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.693 ms 00:16:30.362 [2024-09-28 10:37:04.822405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.362 [2024-09-28 10:37:04.829416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.362 [2024-09-28 10:37:04.829534] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:30.362 [2024-09-28 10:37:04.829549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.944 ms 00:16:30.362 [2024-09-28 10:37:04.829558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.362 [2024-09-28 10:37:04.831226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.362 [2024-09-28 10:37:04.831371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:30.362 [2024-09-28 10:37:04.831386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.606 ms 00:16:30.362 [2024-09-28 10:37:04.831394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.362 [2024-09-28 10:37:04.835627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.362 [2024-09-28 10:37:04.835741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:30.362 [2024-09-28 10:37:04.835799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.198 ms 00:16:30.362 [2024-09-28 10:37:04.835854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.362 [2024-09-28 10:37:04.836007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.362 [2024-09-28 10:37:04.836042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:30.362 [2024-09-28 10:37:04.836095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:16:30.362 [2024-09-28 10:37:04.836125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.362 [2024-09-28 10:37:04.838003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.362 [2024-09-28 10:37:04.838113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:30.362 [2024-09-28 10:37:04.838164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.845 ms 00:16:30.362 [2024-09-28 10:37:04.838185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.362 [2024-09-28 10:37:04.839603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.362 [2024-09-28 10:37:04.839714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:30.362 [2024-09-28 10:37:04.839765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.376 ms 00:16:30.362 [2024-09-28 10:37:04.839785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.362 [2024-09-28 10:37:04.840939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.362 [2024-09-28 10:37:04.841066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:30.362 [2024-09-28 10:37:04.841116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.100 ms 00:16:30.362 [2024-09-28 10:37:04.841137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.362 [2024-09-28 10:37:04.842424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.362 [2024-09-28 10:37:04.842533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:30.362 [2024-09-28 10:37:04.842584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.218 ms 00:16:30.362 [2024-09-28 10:37:04.842605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.362 [2024-09-28 10:37:04.842646] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Bands validity: 00:16:30.362 [2024-09-28 10:37:04.842761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:30.362 [2024-09-28 10:37:04.842797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:30.362 [2024-09-28 10:37:04.842825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:30.362 [2024-09-28 10:37:04.842889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:30.362 [2024-09-28 10:37:04.842918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:30.362 [2024-09-28 10:37:04.842947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:30.362 [2024-09-28 10:37:04.843020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:30.362 [2024-09-28 10:37:04.843049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:30.362 [2024-09-28 10:37:04.843077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:30.362 [2024-09-28 10:37:04.843129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:30.362 [2024-09-28 10:37:04.843183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:30.362 [2024-09-28 10:37:04.843236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:30.362 [2024-09-28 10:37:04.843264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.843323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.843363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.843414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.843446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.843572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.843602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.843629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.843748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.843776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.843804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.843872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.843904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.843932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.843970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.844054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.844085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.844113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.844176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.844206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.844235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.844292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.844323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.844351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.844380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.844441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.844468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.844496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.844557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.844616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.844666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.844696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.844725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.844845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.844874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.844902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.844982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.845014] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.845042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.845098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.845130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.845158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.845185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.845246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.845276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.845304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.845332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.845401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.845429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.845456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.845515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.845572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.845603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.845654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.845683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.845756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.845786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.845841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.845872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.845900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.845952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.846052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.846081] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.846108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.846136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.846200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.846229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.846256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.846284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.846346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.846375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.846403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.846412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.846419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.846428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.846435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.846442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.846449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.846456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.846463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.846470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.846478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.846485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.846492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.846499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.846506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.846513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 
10:37:04.846520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:30.363 [2024-09-28 10:37:04.846537] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:30.363 [2024-09-28 10:37:04.846544] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1563e554-b486-4792-9f1b-1b87095bfb08 00:16:30.364 [2024-09-28 10:37:04.846552] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:30.364 [2024-09-28 10:37:04.846558] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:30.364 [2024-09-28 10:37:04.846565] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:30.364 [2024-09-28 10:37:04.846573] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:30.364 [2024-09-28 10:37:04.846580] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:30.364 [2024-09-28 10:37:04.846595] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:30.364 [2024-09-28 10:37:04.846603] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:30.364 [2024-09-28 10:37:04.846611] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:30.364 [2024-09-28 10:37:04.846618] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:30.364 [2024-09-28 10:37:04.846625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.364 [2024-09-28 10:37:04.846639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:30.364 [2024-09-28 10:37:04.846648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.980 ms 00:16:30.364 [2024-09-28 10:37:04.846655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.364 [2024-09-28 10:37:04.848194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.364 [2024-09-28 10:37:04.848218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:30.364 [2024-09-28 10:37:04.848227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.519 ms 00:16:30.364 [2024-09-28 10:37:04.848235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.364 [2024-09-28 10:37:04.848323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.364 [2024-09-28 10:37:04.848332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:30.364 [2024-09-28 10:37:04.848340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:16:30.364 [2024-09-28 10:37:04.848347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.364 [2024-09-28 10:37:04.853480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:30.364 [2024-09-28 10:37:04.853515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:30.364 [2024-09-28 10:37:04.853525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:30.364 [2024-09-28 10:37:04.853532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.364 [2024-09-28 10:37:04.853601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:30.364 [2024-09-28 10:37:04.853614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:30.364 [2024-09-28 10:37:04.853622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:30.364 
[2024-09-28 10:37:04.853629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.364 [2024-09-28 10:37:04.853667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:30.364 [2024-09-28 10:37:04.853676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:30.364 [2024-09-28 10:37:04.853687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:30.364 [2024-09-28 10:37:04.853694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.364 [2024-09-28 10:37:04.853713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:30.364 [2024-09-28 10:37:04.853723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:30.364 [2024-09-28 10:37:04.853730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:30.364 [2024-09-28 10:37:04.853738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.364 [2024-09-28 10:37:04.862991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:30.364 [2024-09-28 10:37:04.863029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:30.364 [2024-09-28 10:37:04.863040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:30.364 [2024-09-28 10:37:04.863048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.364 [2024-09-28 10:37:04.870305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:30.364 [2024-09-28 10:37:04.870463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:30.364 [2024-09-28 10:37:04.870478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:30.364 [2024-09-28 10:37:04.870486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.364 [2024-09-28 10:37:04.870532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:30.364 [2024-09-28 10:37:04.870541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:30.364 [2024-09-28 10:37:04.870549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:30.364 [2024-09-28 10:37:04.870557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.364 [2024-09-28 10:37:04.870584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:30.364 [2024-09-28 10:37:04.870600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:30.364 [2024-09-28 10:37:04.870610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:30.364 [2024-09-28 10:37:04.870618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.364 [2024-09-28 10:37:04.870687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:30.364 [2024-09-28 10:37:04.870697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:30.364 [2024-09-28 10:37:04.870708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:30.364 [2024-09-28 10:37:04.870715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.364 [2024-09-28 10:37:04.870747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:30.364 [2024-09-28 10:37:04.870756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:30.364 [2024-09-28 10:37:04.870764] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:30.364 [2024-09-28 10:37:04.870774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.364 [2024-09-28 10:37:04.870811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:30.364 [2024-09-28 10:37:04.870819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:30.364 [2024-09-28 10:37:04.870827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:30.364 [2024-09-28 10:37:04.870834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.364 [2024-09-28 10:37:04.870874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:30.364 [2024-09-28 10:37:04.870883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:30.364 [2024-09-28 10:37:04.870896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:30.364 [2024-09-28 10:37:04.870904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.364 [2024-09-28 10:37:04.871056] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 54.221 ms, result 0 00:16:30.364 00:16:30.364 00:16:30.364 10:37:05 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:16:30.364 10:37:05 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:16:30.930 10:37:05 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:30.930 [2024-09-28 10:37:05.666243] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:16:30.931 [2024-09-28 10:37:05.666358] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86738 ] 00:16:31.189 [2024-09-28 10:37:05.793786] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
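Note on reading the layout dumps above and below: the ftl_superblock_v5_md_layout_dump records report each region in blocks (e.g. "Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00"), while the ftl_layout dump_region records report the same regions in MiB (e.g. "Region l2p ... blocks: 90.00 MiB"). A minimal sketch for converting one to the other, assuming a 4 KiB FTL block size (an assumption for illustration; the block size is not stated in this log):

    import re

    # Matches superblock layout records as printed in this log, e.g.
    #   "Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00"
    REGION_RE = re.compile(
        r"Region type:(0x[0-9a-fA-F]+) ver:(\d+) "
        r"blk_offs:(0x[0-9a-fA-F]+) blk_sz:(0x[0-9a-fA-F]+)"
    )

    FTL_BLOCK_SIZE = 4096  # assumed 4 KiB block; adjust if the device differs

    def region_size_mib(log_line):
        """Return the region size in MiB for one layout-dump line, or None."""
        m = REGION_RE.search(log_line)
        if m is None:
            return None
        blk_sz = int(m.group(4), 16)
        return blk_sz * FTL_BLOCK_SIZE / (1024 * 1024)

    # 0x5a00 blocks = 23040 * 4 KiB = 90.00 MiB, which matches the
    # "Region l2p ... blocks: 90.00 MiB" line in the NV cache layout dump.
    print(region_size_mib("Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00"))

Under that 4 KiB assumption the two views of the layout in this log agree; the same conversion applies to the base-device regions (e.g. type:0x9 with blk_sz:0x1900000).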
00:16:31.189 [2024-09-28 10:37:05.813073] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.189 [2024-09-28 10:37:05.844087] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.190 [2024-09-28 10:37:05.932388] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:31.190 [2024-09-28 10:37:05.932452] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:31.450 [2024-09-28 10:37:06.084490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.450 [2024-09-28 10:37:06.084547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:31.450 [2024-09-28 10:37:06.084560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:31.450 [2024-09-28 10:37:06.084569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.450 [2024-09-28 10:37:06.086774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.450 [2024-09-28 10:37:06.086815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:31.450 [2024-09-28 10:37:06.086825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.188 ms 00:16:31.450 [2024-09-28 10:37:06.086832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.450 [2024-09-28 10:37:06.086927] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:31.450 [2024-09-28 10:37:06.087191] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:31.450 [2024-09-28 10:37:06.087213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.450 [2024-09-28 10:37:06.087221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:31.450 [2024-09-28 10:37:06.087230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:16:31.450 [2024-09-28 10:37:06.087237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.450 [2024-09-28 10:37:06.088326] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:31.450 [2024-09-28 10:37:06.090324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.450 [2024-09-28 10:37:06.090359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:31.450 [2024-09-28 10:37:06.090369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.000 ms 00:16:31.450 [2024-09-28 10:37:06.090383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.450 [2024-09-28 10:37:06.090438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.450 [2024-09-28 10:37:06.090448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:31.450 [2024-09-28 10:37:06.090455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:16:31.450 [2024-09-28 10:37:06.090462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.450 [2024-09-28 10:37:06.094949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.450 [2024-09-28 10:37:06.094993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:31.450 [2024-09-28 10:37:06.095002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.443 ms 00:16:31.450 [2024-09-28 10:37:06.095009] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.450 [2024-09-28 10:37:06.095108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.450 [2024-09-28 10:37:06.095119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:31.450 [2024-09-28 10:37:06.095127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:16:31.450 [2024-09-28 10:37:06.095134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.450 [2024-09-28 10:37:06.095158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.450 [2024-09-28 10:37:06.095172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:31.450 [2024-09-28 10:37:06.095184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:31.450 [2024-09-28 10:37:06.095191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.450 [2024-09-28 10:37:06.095211] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:31.450 [2024-09-28 10:37:06.096473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.450 [2024-09-28 10:37:06.096501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:31.450 [2024-09-28 10:37:06.096509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.267 ms 00:16:31.450 [2024-09-28 10:37:06.096516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.450 [2024-09-28 10:37:06.096553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.450 [2024-09-28 10:37:06.096561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:31.450 [2024-09-28 10:37:06.096575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:31.450 [2024-09-28 10:37:06.096583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.450 [2024-09-28 10:37:06.096599] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:31.450 [2024-09-28 10:37:06.096616] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:31.450 [2024-09-28 10:37:06.096648] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:31.450 [2024-09-28 10:37:06.096667] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:31.450 [2024-09-28 10:37:06.096767] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:31.450 [2024-09-28 10:37:06.096781] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:31.450 [2024-09-28 10:37:06.096790] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:31.450 [2024-09-28 10:37:06.096800] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:31.450 [2024-09-28 10:37:06.096809] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:31.450 [2024-09-28 10:37:06.096816] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:31.450 [2024-09-28 10:37:06.096823] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] 
L2P address size: 4 00:16:31.450 [2024-09-28 10:37:06.096831] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:31.450 [2024-09-28 10:37:06.096838] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:31.450 [2024-09-28 10:37:06.096845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.450 [2024-09-28 10:37:06.096856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:31.450 [2024-09-28 10:37:06.096864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:16:31.450 [2024-09-28 10:37:06.096871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.450 [2024-09-28 10:37:06.096974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.450 [2024-09-28 10:37:06.096983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:31.450 [2024-09-28 10:37:06.096992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:16:31.450 [2024-09-28 10:37:06.096999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.450 [2024-09-28 10:37:06.097096] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:31.450 [2024-09-28 10:37:06.097144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:31.450 [2024-09-28 10:37:06.097158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:31.450 [2024-09-28 10:37:06.097167] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:31.450 [2024-09-28 10:37:06.097176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:31.450 [2024-09-28 10:37:06.097184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:31.450 [2024-09-28 10:37:06.097197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:31.450 [2024-09-28 10:37:06.097205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:31.450 [2024-09-28 10:37:06.097216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:31.450 [2024-09-28 10:37:06.097223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:31.450 [2024-09-28 10:37:06.097231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:31.450 [2024-09-28 10:37:06.097238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:31.450 [2024-09-28 10:37:06.097245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:31.450 [2024-09-28 10:37:06.097253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:31.450 [2024-09-28 10:37:06.097261] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:31.451 [2024-09-28 10:37:06.097269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:31.451 [2024-09-28 10:37:06.097277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:31.451 [2024-09-28 10:37:06.097284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:31.451 [2024-09-28 10:37:06.097292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:31.451 [2024-09-28 10:37:06.097299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:31.451 [2024-09-28 10:37:06.097307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:31.451 [2024-09-28 10:37:06.097314] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:31.451 [2024-09-28 10:37:06.097322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:31.451 [2024-09-28 10:37:06.097330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:31.451 [2024-09-28 10:37:06.097341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:31.451 [2024-09-28 10:37:06.097348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:31.451 [2024-09-28 10:37:06.097356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:31.451 [2024-09-28 10:37:06.097363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:31.451 [2024-09-28 10:37:06.097370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:31.451 [2024-09-28 10:37:06.097378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:31.451 [2024-09-28 10:37:06.097385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:31.451 [2024-09-28 10:37:06.097392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:31.451 [2024-09-28 10:37:06.097400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:31.451 [2024-09-28 10:37:06.097407] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:31.451 [2024-09-28 10:37:06.097415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:31.451 [2024-09-28 10:37:06.097422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:31.451 [2024-09-28 10:37:06.097430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:31.451 [2024-09-28 10:37:06.097438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:31.451 [2024-09-28 10:37:06.097447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:31.451 [2024-09-28 10:37:06.097454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:31.451 [2024-09-28 10:37:06.097463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:31.451 [2024-09-28 10:37:06.097470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:31.451 [2024-09-28 10:37:06.097478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:31.451 [2024-09-28 10:37:06.097485] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:31.451 [2024-09-28 10:37:06.097493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:31.451 [2024-09-28 10:37:06.097501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:31.451 [2024-09-28 10:37:06.097510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:31.451 [2024-09-28 10:37:06.097519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:31.451 [2024-09-28 10:37:06.097527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:31.451 [2024-09-28 10:37:06.097535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:31.451 [2024-09-28 10:37:06.097542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:31.451 [2024-09-28 10:37:06.097550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:31.451 [2024-09-28 10:37:06.097557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:16:31.451 [2024-09-28 10:37:06.097566] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:31.451 [2024-09-28 10:37:06.097577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:31.451 [2024-09-28 10:37:06.097585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:31.451 [2024-09-28 10:37:06.097596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:31.451 [2024-09-28 10:37:06.097603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:31.451 [2024-09-28 10:37:06.097609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:31.451 [2024-09-28 10:37:06.097616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:31.451 [2024-09-28 10:37:06.097623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:31.451 [2024-09-28 10:37:06.097630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:31.451 [2024-09-28 10:37:06.097637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:31.451 [2024-09-28 10:37:06.097644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:31.451 [2024-09-28 10:37:06.097650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:31.451 [2024-09-28 10:37:06.097657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:31.451 [2024-09-28 10:37:06.097665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:31.451 [2024-09-28 10:37:06.097672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:31.451 [2024-09-28 10:37:06.097680] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:31.451 [2024-09-28 10:37:06.097687] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:31.451 [2024-09-28 10:37:06.097695] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:31.451 [2024-09-28 10:37:06.097702] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:31.451 [2024-09-28 10:37:06.097711] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:31.451 [2024-09-28 10:37:06.097718] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:31.451 [2024-09-28 10:37:06.097726] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:31.451 [2024-09-28 10:37:06.097733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.451 [2024-09-28 10:37:06.097743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:31.451 [2024-09-28 10:37:06.097750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.706 ms 00:16:31.451 [2024-09-28 10:37:06.097757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.451 [2024-09-28 10:37:06.118399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.451 [2024-09-28 10:37:06.118502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:31.451 [2024-09-28 10:37:06.118541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.586 ms 00:16:31.451 [2024-09-28 10:37:06.118569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.451 [2024-09-28 10:37:06.118953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.451 [2024-09-28 10:37:06.119031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:31.451 [2024-09-28 10:37:06.119068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.185 ms 00:16:31.451 [2024-09-28 10:37:06.119090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.451 [2024-09-28 10:37:06.130234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.451 [2024-09-28 10:37:06.130267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:31.451 [2024-09-28 10:37:06.130282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.061 ms 00:16:31.451 [2024-09-28 10:37:06.130290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.451 [2024-09-28 10:37:06.130351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.451 [2024-09-28 10:37:06.130363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:31.451 [2024-09-28 10:37:06.130372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:31.451 [2024-09-28 10:37:06.130379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.451 [2024-09-28 10:37:06.130669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.451 [2024-09-28 10:37:06.130695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:31.451 [2024-09-28 10:37:06.130710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:16:31.451 [2024-09-28 10:37:06.130718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.451 [2024-09-28 10:37:06.130842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.451 [2024-09-28 10:37:06.130851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:31.451 [2024-09-28 10:37:06.130861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:16:31.451 [2024-09-28 10:37:06.130869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.451 [2024-09-28 10:37:06.135373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.451 [2024-09-28 
10:37:06.135400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:31.451 [2024-09-28 10:37:06.135409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.482 ms 00:16:31.451 [2024-09-28 10:37:06.135416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.451 [2024-09-28 10:37:06.137480] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:31.451 [2024-09-28 10:37:06.137603] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:31.451 [2024-09-28 10:37:06.137617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.451 [2024-09-28 10:37:06.137625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:31.451 [2024-09-28 10:37:06.137633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.122 ms 00:16:31.451 [2024-09-28 10:37:06.137640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.451 [2024-09-28 10:37:06.160626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.451 [2024-09-28 10:37:06.160772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:31.451 [2024-09-28 10:37:06.160790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.945 ms 00:16:31.451 [2024-09-28 10:37:06.160799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.452 [2024-09-28 10:37:06.162481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.452 [2024-09-28 10:37:06.162510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:31.452 [2024-09-28 10:37:06.162519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.614 ms 00:16:31.452 [2024-09-28 10:37:06.162526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.452 [2024-09-28 10:37:06.163847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.452 [2024-09-28 10:37:06.163957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:31.452 [2024-09-28 10:37:06.163981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.283 ms 00:16:31.452 [2024-09-28 10:37:06.163994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.452 [2024-09-28 10:37:06.164311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.452 [2024-09-28 10:37:06.164328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:31.452 [2024-09-28 10:37:06.164341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:16:31.452 [2024-09-28 10:37:06.164348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.452 [2024-09-28 10:37:06.178831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.452 [2024-09-28 10:37:06.178998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:31.452 [2024-09-28 10:37:06.179014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.463 ms 00:16:31.452 [2024-09-28 10:37:06.179022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.452 [2024-09-28 10:37:06.186317] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:31.452 [2024-09-28 10:37:06.199904] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.452 [2024-09-28 10:37:06.199936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:31.452 [2024-09-28 10:37:06.199948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.806 ms 00:16:31.452 [2024-09-28 10:37:06.199955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.452 [2024-09-28 10:37:06.200041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.452 [2024-09-28 10:37:06.200052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:31.452 [2024-09-28 10:37:06.200061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:31.452 [2024-09-28 10:37:06.200071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.452 [2024-09-28 10:37:06.200113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.452 [2024-09-28 10:37:06.200121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:31.452 [2024-09-28 10:37:06.200134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:16:31.452 [2024-09-28 10:37:06.200144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.452 [2024-09-28 10:37:06.200166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.452 [2024-09-28 10:37:06.200178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:31.452 [2024-09-28 10:37:06.200185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:31.452 [2024-09-28 10:37:06.200192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.452 [2024-09-28 10:37:06.200221] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:31.452 [2024-09-28 10:37:06.200230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.452 [2024-09-28 10:37:06.200238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:31.452 [2024-09-28 10:37:06.200245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:31.452 [2024-09-28 10:37:06.200252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.452 [2024-09-28 10:37:06.203644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.452 [2024-09-28 10:37:06.203678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:31.452 [2024-09-28 10:37:06.203688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.370 ms 00:16:31.452 [2024-09-28 10:37:06.203696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.452 [2024-09-28 10:37:06.203774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.452 [2024-09-28 10:37:06.203784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:31.452 [2024-09-28 10:37:06.203792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:16:31.452 [2024-09-28 10:37:06.203799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.452 [2024-09-28 10:37:06.204589] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:31.452 [2024-09-28 10:37:06.205567] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 119.841 
ms, result 0 00:16:31.452 [2024-09-28 10:37:06.206296] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:31.452 [2024-09-28 10:37:06.216414] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:31.712  Copying: 4096/4096 [kB] (average 39 MBps)[2024-09-28 10:37:06.319366] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:31.712 [2024-09-28 10:37:06.320057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.712 [2024-09-28 10:37:06.320088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:31.712 [2024-09-28 10:37:06.320099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:31.712 [2024-09-28 10:37:06.320107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.712 [2024-09-28 10:37:06.320127] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:31.712 [2024-09-28 10:37:06.320528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.712 [2024-09-28 10:37:06.320557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:31.713 [2024-09-28 10:37:06.320566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.391 ms 00:16:31.713 [2024-09-28 10:37:06.320572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.713 [2024-09-28 10:37:06.322033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.713 [2024-09-28 10:37:06.322069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:31.713 [2024-09-28 10:37:06.322078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.440 ms 00:16:31.713 [2024-09-28 10:37:06.322088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.713 [2024-09-28 10:37:06.325835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.713 [2024-09-28 10:37:06.325940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:31.713 [2024-09-28 10:37:06.325953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.732 ms 00:16:31.713 [2024-09-28 10:37:06.325974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.713 [2024-09-28 10:37:06.332972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.713 [2024-09-28 10:37:06.333059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:31.713 [2024-09-28 10:37:06.333110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.971 ms 00:16:31.713 [2024-09-28 10:37:06.333131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.713 [2024-09-28 10:37:06.334371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.713 [2024-09-28 10:37:06.334468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:31.713 [2024-09-28 10:37:06.334517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.172 ms 00:16:31.713 [2024-09-28 10:37:06.334537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.713 [2024-09-28 10:37:06.338085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.713 [2024-09-28 10:37:06.338188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Persist valid map metadata 00:16:31.713 [2024-09-28 10:37:06.338237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.444 ms 00:16:31.713 [2024-09-28 10:37:06.338258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.713 [2024-09-28 10:37:06.338412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.713 [2024-09-28 10:37:06.338472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:31.713 [2024-09-28 10:37:06.338529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:16:31.713 [2024-09-28 10:37:06.338550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.713 [2024-09-28 10:37:06.340327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.713 [2024-09-28 10:37:06.340421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:31.713 [2024-09-28 10:37:06.340468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.742 ms 00:16:31.713 [2024-09-28 10:37:06.340489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.713 [2024-09-28 10:37:06.341603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.713 [2024-09-28 10:37:06.341699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:31.713 [2024-09-28 10:37:06.341744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.074 ms 00:16:31.713 [2024-09-28 10:37:06.341764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.713 [2024-09-28 10:37:06.342887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.713 [2024-09-28 10:37:06.343001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:31.713 [2024-09-28 10:37:06.343052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.085 ms 00:16:31.713 [2024-09-28 10:37:06.343101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.713 [2024-09-28 10:37:06.344164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.713 [2024-09-28 10:37:06.344263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:31.713 [2024-09-28 10:37:06.344313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.997 ms 00:16:31.713 [2024-09-28 10:37:06.344336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.713 [2024-09-28 10:37:06.344373] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:31.713 [2024-09-28 10:37:06.344639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:31.713 [2024-09-28 10:37:06.344749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:31.713 [2024-09-28 10:37:06.344810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:31.713 [2024-09-28 10:37:06.344841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:31.713 [2024-09-28 10:37:06.344910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:31.713 [2024-09-28 10:37:06.344941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:31.713 [2024-09-28 10:37:06.345010] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:31.713 [2024-09-28 10:37:06.345043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:31.713 [2024-09-28 10:37:06.345071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:31.713 [2024-09-28 10:37:06.345122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:31.713 [2024-09-28 10:37:06.345212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:31.713 [2024-09-28 10:37:06.345242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:31.713 [2024-09-28 10:37:06.345269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:31.713 [2024-09-28 10:37:06.345297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:31.713 [2024-09-28 10:37:06.345394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:31.713 [2024-09-28 10:37:06.345424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:31.713 [2024-09-28 10:37:06.345452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:31.713 [2024-09-28 10:37:06.345567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:31.713 [2024-09-28 10:37:06.345597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:31.713 [2024-09-28 10:37:06.345652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:31.713 [2024-09-28 10:37:06.345683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:31.713 [2024-09-28 10:37:06.345711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:31.713 [2024-09-28 10:37:06.345792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:31.713 [2024-09-28 10:37:06.345847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:31.713 [2024-09-28 10:37:06.345877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:31.713 [2024-09-28 10:37:06.345904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:31.713 [2024-09-28 10:37:06.345976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:31.713 [2024-09-28 10:37:06.346008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:31.713 [2024-09-28 10:37:06.346036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:31.713 [2024-09-28 10:37:06.346086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:31.713 [2024-09-28 10:37:06.346118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:31.713 [2024-09-28 
10:37:06.346146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:31.713 [2024-09-28 10:37:06.346173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:31.713 [2024-09-28 10:37:06.346233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:31.713 [2024-09-28 10:37:06.346263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:31.713 [2024-09-28 10:37:06.346291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:31.713 [2024-09-28 10:37:06.346319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:31.713 [2024-09-28 10:37:06.346376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:31.713 [2024-09-28 10:37:06.346543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:31.713 [2024-09-28 10:37:06.346599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:31.713 [2024-09-28 10:37:06.346630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:31.713 [2024-09-28 10:37:06.346658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:31.713 [2024-09-28 10:37:06.346781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:31.713 [2024-09-28 10:37:06.346811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:31.713 [2024-09-28 10:37:06.346866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:31.713 [2024-09-28 10:37:06.346898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:31.713 [2024-09-28 10:37:06.346918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:31.713 [2024-09-28 10:37:06.346926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:31.713 [2024-09-28 10:37:06.346933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:31.713 [2024-09-28 10:37:06.346940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:31.713 [2024-09-28 10:37:06.346947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:31.713 [2024-09-28 10:37:06.346955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:31.713 [2024-09-28 10:37:06.346979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:31.713 [2024-09-28 10:37:06.346987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:31.714 [2024-09-28 10:37:06.346994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:31.714 [2024-09-28 10:37:06.347001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 
00:16:31.714 [2024-09-28 10:37:06.347008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:31.714 [2024-09-28 10:37:06.347016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:31.714 [2024-09-28 10:37:06.347023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:31.714 [2024-09-28 10:37:06.347030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:31.714 [2024-09-28 10:37:06.347037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:31.714 [2024-09-28 10:37:06.347044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:31.714 [2024-09-28 10:37:06.347051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:31.714 [2024-09-28 10:37:06.347059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:31.714 [2024-09-28 10:37:06.347066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:31.714 [2024-09-28 10:37:06.347073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:31.714 [2024-09-28 10:37:06.347080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:31.714 [2024-09-28 10:37:06.347087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:31.714 [2024-09-28 10:37:06.347094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:31.714 [2024-09-28 10:37:06.347101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:31.714 [2024-09-28 10:37:06.347108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:31.714 [2024-09-28 10:37:06.347115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:31.714 [2024-09-28 10:37:06.347122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:31.714 [2024-09-28 10:37:06.347129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:31.714 [2024-09-28 10:37:06.347136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:31.714 [2024-09-28 10:37:06.347143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:31.714 [2024-09-28 10:37:06.347152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:31.714 [2024-09-28 10:37:06.347159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:31.714 [2024-09-28 10:37:06.347166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:31.714 [2024-09-28 10:37:06.347173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:31.714 [2024-09-28 10:37:06.347180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 
wr_cnt: 0 state: free 00:16:31.714 [2024-09-28 10:37:06.347187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:31.714 [2024-09-28 10:37:06.347194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:31.714 [2024-09-28 10:37:06.347201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:31.714 [2024-09-28 10:37:06.347210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:31.714 [2024-09-28 10:37:06.347217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:31.714 [2024-09-28 10:37:06.347224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:31.714 [2024-09-28 10:37:06.347231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:31.714 [2024-09-28 10:37:06.347238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:31.714 [2024-09-28 10:37:06.347245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:31.714 [2024-09-28 10:37:06.347252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:31.714 [2024-09-28 10:37:06.347259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:31.714 [2024-09-28 10:37:06.347266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:31.714 [2024-09-28 10:37:06.347274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:31.714 [2024-09-28 10:37:06.347281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:31.714 [2024-09-28 10:37:06.347288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:31.714 [2024-09-28 10:37:06.347295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:31.714 [2024-09-28 10:37:06.347302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:31.714 [2024-09-28 10:37:06.347310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:31.714 [2024-09-28 10:37:06.347317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:31.714 [2024-09-28 10:37:06.347332] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:31.714 [2024-09-28 10:37:06.347339] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1563e554-b486-4792-9f1b-1b87095bfb08 00:16:31.714 [2024-09-28 10:37:06.347355] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:31.714 [2024-09-28 10:37:06.347362] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:31.714 [2024-09-28 10:37:06.347369] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:31.714 [2024-09-28 10:37:06.347376] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:31.714 [2024-09-28 10:37:06.347383] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] limits: 00:16:31.714 [2024-09-28 10:37:06.347398] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:31.714 [2024-09-28 10:37:06.347405] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:31.714 [2024-09-28 10:37:06.347411] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:31.714 [2024-09-28 10:37:06.347417] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:31.714 [2024-09-28 10:37:06.347426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.714 [2024-09-28 10:37:06.347438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:31.714 [2024-09-28 10:37:06.347447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.053 ms 00:16:31.714 [2024-09-28 10:37:06.347454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.714 [2024-09-28 10:37:06.348755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.714 [2024-09-28 10:37:06.348777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:31.714 [2024-09-28 10:37:06.348789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.273 ms 00:16:31.714 [2024-09-28 10:37:06.348797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.714 [2024-09-28 10:37:06.348877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.714 [2024-09-28 10:37:06.348885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:31.714 [2024-09-28 10:37:06.348893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:16:31.714 [2024-09-28 10:37:06.348900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.714 [2024-09-28 10:37:06.353393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:31.714 [2024-09-28 10:37:06.353500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:31.714 [2024-09-28 10:37:06.353549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:31.714 [2024-09-28 10:37:06.353591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.714 [2024-09-28 10:37:06.353677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:31.714 [2024-09-28 10:37:06.353700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:31.714 [2024-09-28 10:37:06.353743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:31.714 [2024-09-28 10:37:06.353764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.714 [2024-09-28 10:37:06.353823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:31.714 [2024-09-28 10:37:06.353848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:31.714 [2024-09-28 10:37:06.353867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:31.714 [2024-09-28 10:37:06.353887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.714 [2024-09-28 10:37:06.353915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:31.714 [2024-09-28 10:37:06.353937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:31.714 [2024-09-28 10:37:06.354018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:31.714 [2024-09-28 
10:37:06.354042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.714 [2024-09-28 10:37:06.362376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:31.714 [2024-09-28 10:37:06.362512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:31.714 [2024-09-28 10:37:06.362561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:31.714 [2024-09-28 10:37:06.362603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.714 [2024-09-28 10:37:06.369295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:31.714 [2024-09-28 10:37:06.369416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:31.714 [2024-09-28 10:37:06.369465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:31.714 [2024-09-28 10:37:06.369487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.714 [2024-09-28 10:37:06.369542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:31.714 [2024-09-28 10:37:06.369636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:31.714 [2024-09-28 10:37:06.369660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:31.714 [2024-09-28 10:37:06.369679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.715 [2024-09-28 10:37:06.369727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:31.715 [2024-09-28 10:37:06.369824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:31.715 [2024-09-28 10:37:06.369853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:31.715 [2024-09-28 10:37:06.369872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.715 [2024-09-28 10:37:06.369950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:31.715 [2024-09-28 10:37:06.369999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:31.715 [2024-09-28 10:37:06.370021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:31.715 [2024-09-28 10:37:06.370040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.715 [2024-09-28 10:37:06.370120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:31.715 [2024-09-28 10:37:06.370174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:31.715 [2024-09-28 10:37:06.370217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:31.715 [2024-09-28 10:37:06.370244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.715 [2024-09-28 10:37:06.370293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:31.715 [2024-09-28 10:37:06.370376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:31.715 [2024-09-28 10:37:06.370399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:31.715 [2024-09-28 10:37:06.370425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.715 [2024-09-28 10:37:06.370481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:31.715 [2024-09-28 10:37:06.370505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:31.715 [2024-09-28 10:37:06.370528] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:31.715 [2024-09-28 10:37:06.370536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.715 [2024-09-28 10:37:06.370664] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 50.586 ms, result 0 00:16:31.973 00:16:31.973 00:16:31.973 10:37:06 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:16:31.973 10:37:06 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=86751 00:16:31.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:31.973 10:37:06 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 86751 00:16:31.973 10:37:06 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 86751 ']' 00:16:31.973 10:37:06 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:31.973 10:37:06 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:31.973 10:37:06 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:31.973 10:37:06 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:31.973 10:37:06 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:16:31.973 [2024-09-28 10:37:06.617312] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:16:31.973 [2024-09-28 10:37:06.617549] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86751 ] 00:16:31.973 [2024-09-28 10:37:06.745364] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:16:32.232 [2024-09-28 10:37:06.766500] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:32.232 [2024-09-28 10:37:06.797638] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:32.799 10:37:07 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:32.799 10:37:07 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:16:32.799 10:37:07 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:16:33.058 [2024-09-28 10:37:07.668992] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:33.058 [2024-09-28 10:37:07.669054] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:33.318 [2024-09-28 10:37:07.837622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.318 [2024-09-28 10:37:07.837671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:33.318 [2024-09-28 10:37:07.837686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:33.318 [2024-09-28 10:37:07.837694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.318 [2024-09-28 10:37:07.839906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.318 [2024-09-28 10:37:07.839942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:33.318 [2024-09-28 10:37:07.839953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.192 ms 00:16:33.318 [2024-09-28 10:37:07.839974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.318 [2024-09-28 10:37:07.840045] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:33.318 [2024-09-28 10:37:07.840272] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:33.318 [2024-09-28 10:37:07.840287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.318 [2024-09-28 10:37:07.840294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:33.318 [2024-09-28 10:37:07.840304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:16:33.318 [2024-09-28 10:37:07.840312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.318 [2024-09-28 10:37:07.841555] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:33.318 [2024-09-28 10:37:07.843710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.318 [2024-09-28 10:37:07.843747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:33.318 [2024-09-28 10:37:07.843756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.159 ms 00:16:33.318 [2024-09-28 10:37:07.843766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.318 [2024-09-28 10:37:07.843824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.318 [2024-09-28 10:37:07.843837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:33.318 [2024-09-28 10:37:07.843845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:16:33.318 [2024-09-28 10:37:07.843854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.318 [2024-09-28 10:37:07.848434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.318 [2024-09-28 
10:37:07.848465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:33.318 [2024-09-28 10:37:07.848474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.533 ms 00:16:33.318 [2024-09-28 10:37:07.848483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.318 [2024-09-28 10:37:07.848586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.318 [2024-09-28 10:37:07.848599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:33.318 [2024-09-28 10:37:07.848607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:16:33.319 [2024-09-28 10:37:07.848615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.319 [2024-09-28 10:37:07.848642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.319 [2024-09-28 10:37:07.848653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:33.319 [2024-09-28 10:37:07.848664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:33.319 [2024-09-28 10:37:07.848673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.319 [2024-09-28 10:37:07.848695] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:33.319 [2024-09-28 10:37:07.849987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.319 [2024-09-28 10:37:07.850012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:33.319 [2024-09-28 10:37:07.850022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.292 ms 00:16:33.319 [2024-09-28 10:37:07.850031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.319 [2024-09-28 10:37:07.850067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.319 [2024-09-28 10:37:07.850075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:33.319 [2024-09-28 10:37:07.850085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:33.319 [2024-09-28 10:37:07.850092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.319 [2024-09-28 10:37:07.850112] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:33.319 [2024-09-28 10:37:07.850131] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:33.319 [2024-09-28 10:37:07.850171] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:33.319 [2024-09-28 10:37:07.850188] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:33.319 [2024-09-28 10:37:07.850292] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:33.319 [2024-09-28 10:37:07.850302] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:33.319 [2024-09-28 10:37:07.850313] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:33.319 [2024-09-28 10:37:07.850323] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:33.319 [2024-09-28 10:37:07.850334] ftl_layout.c: 687:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:33.319 [2024-09-28 10:37:07.850342] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:33.319 [2024-09-28 10:37:07.850351] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:33.319 [2024-09-28 10:37:07.850358] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:33.319 [2024-09-28 10:37:07.850366] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:33.319 [2024-09-28 10:37:07.850375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.319 [2024-09-28 10:37:07.850384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:33.319 [2024-09-28 10:37:07.850391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:16:33.319 [2024-09-28 10:37:07.850399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.319 [2024-09-28 10:37:07.850485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.319 [2024-09-28 10:37:07.850494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:33.319 [2024-09-28 10:37:07.850502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:16:33.319 [2024-09-28 10:37:07.850510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.319 [2024-09-28 10:37:07.850607] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:33.319 [2024-09-28 10:37:07.850621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:33.319 [2024-09-28 10:37:07.850630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:33.319 [2024-09-28 10:37:07.850641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:33.319 [2024-09-28 10:37:07.850649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:33.319 [2024-09-28 10:37:07.850660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:33.319 [2024-09-28 10:37:07.850668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:33.319 [2024-09-28 10:37:07.850678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:33.319 [2024-09-28 10:37:07.850686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:33.319 [2024-09-28 10:37:07.850695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:33.319 [2024-09-28 10:37:07.850703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:33.319 [2024-09-28 10:37:07.850711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:33.319 [2024-09-28 10:37:07.850719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:33.319 [2024-09-28 10:37:07.850733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:33.319 [2024-09-28 10:37:07.850741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:33.319 [2024-09-28 10:37:07.850750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:33.319 [2024-09-28 10:37:07.850758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:33.319 [2024-09-28 10:37:07.850768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:33.319 [2024-09-28 10:37:07.850775] ftl_layout.c: 133:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:16:33.319 [2024-09-28 10:37:07.850786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:33.319 [2024-09-28 10:37:07.850794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:33.319 [2024-09-28 10:37:07.850802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:33.319 [2024-09-28 10:37:07.850810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:33.319 [2024-09-28 10:37:07.850819] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:33.319 [2024-09-28 10:37:07.850826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:33.319 [2024-09-28 10:37:07.850835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:33.319 [2024-09-28 10:37:07.850843] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:33.319 [2024-09-28 10:37:07.850852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:33.319 [2024-09-28 10:37:07.850859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:33.319 [2024-09-28 10:37:07.850868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:33.319 [2024-09-28 10:37:07.850876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:33.319 [2024-09-28 10:37:07.850886] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:33.319 [2024-09-28 10:37:07.850894] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:33.319 [2024-09-28 10:37:07.850903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:33.319 [2024-09-28 10:37:07.850910] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:33.319 [2024-09-28 10:37:07.850920] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:33.319 [2024-09-28 10:37:07.850927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:33.319 [2024-09-28 10:37:07.850936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:33.319 [2024-09-28 10:37:07.850944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:33.319 [2024-09-28 10:37:07.850952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:33.319 [2024-09-28 10:37:07.851213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:33.319 [2024-09-28 10:37:07.851257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:33.319 [2024-09-28 10:37:07.851277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:33.319 [2024-09-28 10:37:07.851296] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:33.319 [2024-09-28 10:37:07.851369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:33.319 [2024-09-28 10:37:07.851395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:33.319 [2024-09-28 10:37:07.851418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:33.319 [2024-09-28 10:37:07.851438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:33.319 [2024-09-28 10:37:07.851459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:33.319 [2024-09-28 10:37:07.851508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:33.319 [2024-09-28 10:37:07.851553] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:33.319 [2024-09-28 10:37:07.851578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:33.319 [2024-09-28 10:37:07.851624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:33.319 [2024-09-28 10:37:07.851650] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:33.319 [2024-09-28 10:37:07.851683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:33.319 [2024-09-28 10:37:07.851872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:33.319 [2024-09-28 10:37:07.851905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:33.319 [2024-09-28 10:37:07.851935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:33.319 [2024-09-28 10:37:07.851975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:33.319 [2024-09-28 10:37:07.852119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:33.319 [2024-09-28 10:37:07.852149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:33.319 [2024-09-28 10:37:07.852179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:33.319 [2024-09-28 10:37:07.852241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:33.319 [2024-09-28 10:37:07.852275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:33.319 [2024-09-28 10:37:07.852303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:33.320 [2024-09-28 10:37:07.852324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:33.320 [2024-09-28 10:37:07.852332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:33.320 [2024-09-28 10:37:07.852343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:33.320 [2024-09-28 10:37:07.852351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:33.320 [2024-09-28 10:37:07.852359] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:33.320 [2024-09-28 10:37:07.852368] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:33.320 [2024-09-28 10:37:07.852380] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 
blk_sz:0x20 00:16:33.320 [2024-09-28 10:37:07.852387] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:33.320 [2024-09-28 10:37:07.852396] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:33.320 [2024-09-28 10:37:07.852403] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:33.320 [2024-09-28 10:37:07.852414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.320 [2024-09-28 10:37:07.852421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:33.320 [2024-09-28 10:37:07.852431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.872 ms 00:16:33.320 [2024-09-28 10:37:07.852438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.320 [2024-09-28 10:37:07.860820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.320 [2024-09-28 10:37:07.860853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:33.320 [2024-09-28 10:37:07.860864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.293 ms 00:16:33.320 [2024-09-28 10:37:07.860872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.320 [2024-09-28 10:37:07.861003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.320 [2024-09-28 10:37:07.861013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:33.320 [2024-09-28 10:37:07.861023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:16:33.320 [2024-09-28 10:37:07.861030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.320 [2024-09-28 10:37:07.868859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.320 [2024-09-28 10:37:07.868892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:33.320 [2024-09-28 10:37:07.868904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.807 ms 00:16:33.320 [2024-09-28 10:37:07.868914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.320 [2024-09-28 10:37:07.868986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.320 [2024-09-28 10:37:07.868996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:33.320 [2024-09-28 10:37:07.869008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:33.320 [2024-09-28 10:37:07.869016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.320 [2024-09-28 10:37:07.869313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.320 [2024-09-28 10:37:07.869340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:33.320 [2024-09-28 10:37:07.869350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:16:33.320 [2024-09-28 10:37:07.869358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.320 [2024-09-28 10:37:07.869483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.320 [2024-09-28 10:37:07.869496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:33.320 [2024-09-28 10:37:07.869506] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:16:33.320 [2024-09-28 10:37:07.869519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.320 [2024-09-28 10:37:07.882071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.320 [2024-09-28 10:37:07.882110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:33.320 [2024-09-28 10:37:07.882124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.527 ms 00:16:33.320 [2024-09-28 10:37:07.882133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.320 [2024-09-28 10:37:07.884371] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:16:33.320 [2024-09-28 10:37:07.884501] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:33.320 [2024-09-28 10:37:07.884520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.320 [2024-09-28 10:37:07.884528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:33.320 [2024-09-28 10:37:07.884538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.263 ms 00:16:33.320 [2024-09-28 10:37:07.884546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.320 [2024-09-28 10:37:07.899107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.320 [2024-09-28 10:37:07.899223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:33.320 [2024-09-28 10:37:07.899244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.518 ms 00:16:33.320 [2024-09-28 10:37:07.899252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.320 [2024-09-28 10:37:07.900830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.320 [2024-09-28 10:37:07.900860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:33.320 [2024-09-28 10:37:07.900871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.506 ms 00:16:33.320 [2024-09-28 10:37:07.900878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.320 [2024-09-28 10:37:07.902339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.320 [2024-09-28 10:37:07.902444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:33.320 [2024-09-28 10:37:07.902461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.423 ms 00:16:33.320 [2024-09-28 10:37:07.902468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.320 [2024-09-28 10:37:07.902780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.320 [2024-09-28 10:37:07.902796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:33.320 [2024-09-28 10:37:07.902806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:16:33.320 [2024-09-28 10:37:07.902813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.320 [2024-09-28 10:37:07.917717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.320 [2024-09-28 10:37:07.917857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:33.320 [2024-09-28 10:37:07.917880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.880 ms 
00:16:33.320 [2024-09-28 10:37:07.917888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.320 [2024-09-28 10:37:07.925267] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:33.320 [2024-09-28 10:37:07.938778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.320 [2024-09-28 10:37:07.938814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:33.320 [2024-09-28 10:37:07.938825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.837 ms 00:16:33.320 [2024-09-28 10:37:07.938842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.320 [2024-09-28 10:37:07.938931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.320 [2024-09-28 10:37:07.938944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:33.320 [2024-09-28 10:37:07.938954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:33.320 [2024-09-28 10:37:07.938986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.320 [2024-09-28 10:37:07.939041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.320 [2024-09-28 10:37:07.939054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:33.320 [2024-09-28 10:37:07.939065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:16:33.320 [2024-09-28 10:37:07.939074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.320 [2024-09-28 10:37:07.939097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.320 [2024-09-28 10:37:07.939108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:33.320 [2024-09-28 10:37:07.939116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:33.320 [2024-09-28 10:37:07.939127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.320 [2024-09-28 10:37:07.939155] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:33.320 [2024-09-28 10:37:07.939166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.320 [2024-09-28 10:37:07.939173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:33.320 [2024-09-28 10:37:07.939182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:33.320 [2024-09-28 10:37:07.939189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.320 [2024-09-28 10:37:07.942496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.320 [2024-09-28 10:37:07.942528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:33.320 [2024-09-28 10:37:07.942540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.282 ms 00:16:33.320 [2024-09-28 10:37:07.942549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.320 [2024-09-28 10:37:07.942621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.320 [2024-09-28 10:37:07.942630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:33.320 [2024-09-28 10:37:07.942640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:16:33.320 [2024-09-28 10:37:07.942647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.320 [2024-09-28 
10:37:07.943491] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:33.320 [2024-09-28 10:37:07.944483] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 105.609 ms, result 0 00:16:33.320 [2024-09-28 10:37:07.945459] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:33.320 Some configs were skipped because the RPC state that can call them passed over. 00:16:33.320 10:37:07 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:16:33.581 [2024-09-28 10:37:08.169332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.581 [2024-09-28 10:37:08.169497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:16:33.581 [2024-09-28 10:37:08.169555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.801 ms 00:16:33.581 [2024-09-28 10:37:08.169581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.581 [2024-09-28 10:37:08.169629] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.101 ms, result 0 00:16:33.581 true 00:16:33.581 10:37:08 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:16:33.845 [2024-09-28 10:37:08.370189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.845 [2024-09-28 10:37:08.370234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:16:33.845 [2024-09-28 10:37:08.370249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.416 ms 00:16:33.845 [2024-09-28 10:37:08.370257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.845 [2024-09-28 10:37:08.370295] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.530 ms, result 0 00:16:33.845 true 00:16:33.845 10:37:08 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 86751 00:16:33.845 10:37:08 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 86751 ']' 00:16:33.845 10:37:08 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 86751 00:16:33.845 10:37:08 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:16:33.845 10:37:08 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:33.845 10:37:08 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86751 00:16:33.845 killing process with pid 86751 00:16:33.845 10:37:08 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:33.845 10:37:08 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:33.845 10:37:08 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86751' 00:16:33.845 10:37:08 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 86751 00:16:33.845 10:37:08 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 86751 00:16:33.845 [2024-09-28 10:37:08.544350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.845 [2024-09-28 10:37:08.544413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:33.845 [2024-09-28 10:37:08.544431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:33.845 [2024-09-28 
10:37:08.544443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.845 [2024-09-28 10:37:08.544469] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:33.845 [2024-09-28 10:37:08.545096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.845 [2024-09-28 10:37:08.545124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:33.845 [2024-09-28 10:37:08.545138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.609 ms 00:16:33.845 [2024-09-28 10:37:08.545147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.845 [2024-09-28 10:37:08.545445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.845 [2024-09-28 10:37:08.545468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:33.845 [2024-09-28 10:37:08.545480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:16:33.845 [2024-09-28 10:37:08.545489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.845 [2024-09-28 10:37:08.550052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.845 [2024-09-28 10:37:08.550091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:33.845 [2024-09-28 10:37:08.550105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.526 ms 00:16:33.845 [2024-09-28 10:37:08.550119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.845 [2024-09-28 10:37:08.557117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.845 [2024-09-28 10:37:08.557150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:33.845 [2024-09-28 10:37:08.557165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.957 ms 00:16:33.845 [2024-09-28 10:37:08.557173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.845 [2024-09-28 10:37:08.559722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.845 [2024-09-28 10:37:08.559906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:33.845 [2024-09-28 10:37:08.559926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.478 ms 00:16:33.845 [2024-09-28 10:37:08.559933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.845 [2024-09-28 10:37:08.564361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.845 [2024-09-28 10:37:08.564397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:33.845 [2024-09-28 10:37:08.564409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.370 ms 00:16:33.845 [2024-09-28 10:37:08.564420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.846 [2024-09-28 10:37:08.564551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.846 [2024-09-28 10:37:08.564561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:33.846 [2024-09-28 10:37:08.564577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:16:33.846 [2024-09-28 10:37:08.564584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.846 [2024-09-28 10:37:08.567450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.846 [2024-09-28 10:37:08.567482] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:33.846 [2024-09-28 10:37:08.567496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.835 ms 00:16:33.846 [2024-09-28 10:37:08.567503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.846 [2024-09-28 10:37:08.569813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.846 [2024-09-28 10:37:08.569845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:33.846 [2024-09-28 10:37:08.569856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.255 ms 00:16:33.846 [2024-09-28 10:37:08.569863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.846 [2024-09-28 10:37:08.571516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.846 [2024-09-28 10:37:08.571548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:33.846 [2024-09-28 10:37:08.571559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.612 ms 00:16:33.846 [2024-09-28 10:37:08.571566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.846 [2024-09-28 10:37:08.573471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.846 [2024-09-28 10:37:08.573594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:33.846 [2024-09-28 10:37:08.573613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.836 ms 00:16:33.846 [2024-09-28 10:37:08.573621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.846 [2024-09-28 10:37:08.573655] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:33.846 [2024-09-28 10:37:08.573670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.573685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.573693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.573703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.573710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.573720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.573727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.573738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.573746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.573755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.573763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.573773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.573780] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.573790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.573797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.573806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.573813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.573825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.573832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.573843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.573850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.573859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.573866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.573875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.573883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.573891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.573899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.573910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.573918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.573927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.573935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.573945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.573952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.573980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.573988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.573998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.574006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.574015] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.574023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.574033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.574040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.574049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.574057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.574067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.574074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.574083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.574090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.574099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.574107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.574118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.574125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.574135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.574142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.574151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.574159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.574169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.574179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.574188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.574201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.574210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.574218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.574228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 
10:37:08.574236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.574247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.574254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.574265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.574273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.574282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.574290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.574300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.574307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.574316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:33.846 [2024-09-28 10:37:08.574324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:33.847 [2024-09-28 10:37:08.574333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:33.847 [2024-09-28 10:37:08.574340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:33.847 [2024-09-28 10:37:08.574349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:33.847 [2024-09-28 10:37:08.574356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:33.847 [2024-09-28 10:37:08.574365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:33.847 [2024-09-28 10:37:08.574372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:33.847 [2024-09-28 10:37:08.574381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:33.847 [2024-09-28 10:37:08.574388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:33.847 [2024-09-28 10:37:08.574400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:33.847 [2024-09-28 10:37:08.574408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:33.847 [2024-09-28 10:37:08.574417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:33.847 [2024-09-28 10:37:08.574423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:33.847 [2024-09-28 10:37:08.574433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:33.847 [2024-09-28 10:37:08.574440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:16:33.847 [2024-09-28 10:37:08.574450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:33.847 [2024-09-28 10:37:08.574457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:33.847 [2024-09-28 10:37:08.574466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:33.847 [2024-09-28 10:37:08.574473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:33.847 [2024-09-28 10:37:08.574483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:33.847 [2024-09-28 10:37:08.574496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:33.847 [2024-09-28 10:37:08.574505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:33.847 [2024-09-28 10:37:08.574514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:33.847 [2024-09-28 10:37:08.574523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:33.847 [2024-09-28 10:37:08.574531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:33.847 [2024-09-28 10:37:08.574542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:33.847 [2024-09-28 10:37:08.574550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:33.847 [2024-09-28 10:37:08.574559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:33.847 [2024-09-28 10:37:08.574575] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:33.847 [2024-09-28 10:37:08.574585] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1563e554-b486-4792-9f1b-1b87095bfb08 00:16:33.847 [2024-09-28 10:37:08.574593] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:33.847 [2024-09-28 10:37:08.574606] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:33.847 [2024-09-28 10:37:08.574613] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:33.847 [2024-09-28 10:37:08.574622] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:33.847 [2024-09-28 10:37:08.574638] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:33.847 [2024-09-28 10:37:08.574648] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:33.847 [2024-09-28 10:37:08.574655] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:33.847 [2024-09-28 10:37:08.574665] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:33.847 [2024-09-28 10:37:08.574672] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:33.847 [2024-09-28 10:37:08.574681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.847 [2024-09-28 10:37:08.574693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:33.847 [2024-09-28 10:37:08.574706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.027 ms 00:16:33.847 [2024-09-28 10:37:08.574714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:16:33.847 [2024-09-28 10:37:08.576721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.847 [2024-09-28 10:37:08.576832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:33.847 [2024-09-28 10:37:08.576851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.987 ms 00:16:33.847 [2024-09-28 10:37:08.576861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.847 [2024-09-28 10:37:08.576980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.847 [2024-09-28 10:37:08.576991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:33.847 [2024-09-28 10:37:08.577003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:16:33.847 [2024-09-28 10:37:08.577011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.847 [2024-09-28 10:37:08.584062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:33.847 [2024-09-28 10:37:08.584094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:33.847 [2024-09-28 10:37:08.584107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:33.847 [2024-09-28 10:37:08.584114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.847 [2024-09-28 10:37:08.584199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:33.847 [2024-09-28 10:37:08.584208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:33.847 [2024-09-28 10:37:08.584220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:33.847 [2024-09-28 10:37:08.584228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.847 [2024-09-28 10:37:08.584283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:33.847 [2024-09-28 10:37:08.584294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:33.847 [2024-09-28 10:37:08.584303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:33.847 [2024-09-28 10:37:08.584311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.847 [2024-09-28 10:37:08.584331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:33.847 [2024-09-28 10:37:08.584340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:33.847 [2024-09-28 10:37:08.584349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:33.847 [2024-09-28 10:37:08.584356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.847 [2024-09-28 10:37:08.596702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:33.847 [2024-09-28 10:37:08.596740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:33.847 [2024-09-28 10:37:08.596753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:33.847 [2024-09-28 10:37:08.596760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.847 [2024-09-28 10:37:08.606624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:33.847 [2024-09-28 10:37:08.606801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:33.847 [2024-09-28 10:37:08.606824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:33.847 
[2024-09-28 10:37:08.606834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.847 [2024-09-28 10:37:08.606893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:33.847 [2024-09-28 10:37:08.606905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:33.847 [2024-09-28 10:37:08.606916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:33.847 [2024-09-28 10:37:08.606924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.847 [2024-09-28 10:37:08.607023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:33.847 [2024-09-28 10:37:08.607033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:33.847 [2024-09-28 10:37:08.607042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:33.847 [2024-09-28 10:37:08.607050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.847 [2024-09-28 10:37:08.607133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:33.847 [2024-09-28 10:37:08.607144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:33.847 [2024-09-28 10:37:08.607156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:33.847 [2024-09-28 10:37:08.607164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.847 [2024-09-28 10:37:08.607199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:33.847 [2024-09-28 10:37:08.607209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:33.847 [2024-09-28 10:37:08.607221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:33.847 [2024-09-28 10:37:08.607229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.847 [2024-09-28 10:37:08.607272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:33.847 [2024-09-28 10:37:08.607286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:33.847 [2024-09-28 10:37:08.607298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:33.847 [2024-09-28 10:37:08.607306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.847 [2024-09-28 10:37:08.607369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:33.847 [2024-09-28 10:37:08.607380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:33.847 [2024-09-28 10:37:08.607390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:33.847 [2024-09-28 10:37:08.607399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.847 [2024-09-28 10:37:08.607550] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 63.168 ms, result 0 00:16:34.109 10:37:08 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:34.372 [2024-09-28 10:37:08.885344] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
00:16:34.372 [2024-09-28 10:37:08.885660] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86788 ] 00:16:34.372 [2024-09-28 10:37:09.017591] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:34.372 [2024-09-28 10:37:09.039154] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:34.372 [2024-09-28 10:37:09.092484] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:34.635 [2024-09-28 10:37:09.208089] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:34.635 [2024-09-28 10:37:09.208175] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:34.635 [2024-09-28 10:37:09.369294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.635 [2024-09-28 10:37:09.369351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:34.635 [2024-09-28 10:37:09.369366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:34.635 [2024-09-28 10:37:09.369375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.635 [2024-09-28 10:37:09.371953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.635 [2024-09-28 10:37:09.372030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:34.635 [2024-09-28 10:37:09.372042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.555 ms 00:16:34.635 [2024-09-28 10:37:09.372049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.635 [2024-09-28 10:37:09.372165] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:34.635 [2024-09-28 10:37:09.372424] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:34.635 [2024-09-28 10:37:09.372448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.635 [2024-09-28 10:37:09.372458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:34.635 [2024-09-28 10:37:09.372467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:16:34.635 [2024-09-28 10:37:09.372479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.635 [2024-09-28 10:37:09.374540] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:34.635 [2024-09-28 10:37:09.378568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.635 [2024-09-28 10:37:09.378625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:34.635 [2024-09-28 10:37:09.378640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.030 ms 00:16:34.635 [2024-09-28 10:37:09.378649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.635 [2024-09-28 10:37:09.379202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.635 [2024-09-28 10:37:09.379313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:34.635 [2024-09-28 10:37:09.379372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:16:34.635 [2024-09-28 
10:37:09.379395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.635 [2024-09-28 10:37:09.389276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.635 [2024-09-28 10:37:09.389503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:34.635 [2024-09-28 10:37:09.389536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.751 ms 00:16:34.635 [2024-09-28 10:37:09.389556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.635 [2024-09-28 10:37:09.389731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.635 [2024-09-28 10:37:09.389748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:34.635 [2024-09-28 10:37:09.389760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:16:34.635 [2024-09-28 10:37:09.389771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.635 [2024-09-28 10:37:09.389809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.635 [2024-09-28 10:37:09.389825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:34.635 [2024-09-28 10:37:09.389838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:16:34.635 [2024-09-28 10:37:09.389848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.635 [2024-09-28 10:37:09.389879] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:34.635 [2024-09-28 10:37:09.392205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.635 [2024-09-28 10:37:09.392251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:34.635 [2024-09-28 10:37:09.392274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.334 ms 00:16:34.635 [2024-09-28 10:37:09.392292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.635 [2024-09-28 10:37:09.392353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.635 [2024-09-28 10:37:09.392370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:34.635 [2024-09-28 10:37:09.392385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:16:34.635 [2024-09-28 10:37:09.392396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.635 [2024-09-28 10:37:09.392422] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:34.635 [2024-09-28 10:37:09.392449] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:34.635 [2024-09-28 10:37:09.392505] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:34.635 [2024-09-28 10:37:09.392532] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:34.635 [2024-09-28 10:37:09.392685] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:34.635 [2024-09-28 10:37:09.392700] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:34.635 [2024-09-28 10:37:09.392714] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:16:34.635 [2024-09-28 10:37:09.392729] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:34.635 [2024-09-28 10:37:09.392741] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:34.635 [2024-09-28 10:37:09.392752] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:34.635 [2024-09-28 10:37:09.392763] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:34.635 [2024-09-28 10:37:09.392773] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:34.635 [2024-09-28 10:37:09.392784] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:34.635 [2024-09-28 10:37:09.392798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.635 [2024-09-28 10:37:09.392812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:34.635 [2024-09-28 10:37:09.392834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.375 ms 00:16:34.635 [2024-09-28 10:37:09.392844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.635 [2024-09-28 10:37:09.393201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.635 [2024-09-28 10:37:09.393256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:34.635 [2024-09-28 10:37:09.393287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:16:34.635 [2024-09-28 10:37:09.393315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.635 [2024-09-28 10:37:09.393489] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:34.635 [2024-09-28 10:37:09.393524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:34.635 [2024-09-28 10:37:09.393561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:34.635 [2024-09-28 10:37:09.393601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:34.635 [2024-09-28 10:37:09.393776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:34.635 [2024-09-28 10:37:09.393810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:34.635 [2024-09-28 10:37:09.393957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:34.635 [2024-09-28 10:37:09.394018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:34.635 [2024-09-28 10:37:09.394085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:34.635 [2024-09-28 10:37:09.394117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:34.635 [2024-09-28 10:37:09.394145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:34.635 [2024-09-28 10:37:09.394214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:34.635 [2024-09-28 10:37:09.394245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:34.635 [2024-09-28 10:37:09.394271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:34.635 [2024-09-28 10:37:09.394299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:34.635 [2024-09-28 10:37:09.394353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:34.635 [2024-09-28 10:37:09.394368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:16:34.635 [2024-09-28 10:37:09.394379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:34.636 [2024-09-28 10:37:09.394388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:34.636 [2024-09-28 10:37:09.394399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:34.636 [2024-09-28 10:37:09.394408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:34.636 [2024-09-28 10:37:09.394418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:34.636 [2024-09-28 10:37:09.394428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:34.636 [2024-09-28 10:37:09.394445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:34.636 [2024-09-28 10:37:09.394455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:34.636 [2024-09-28 10:37:09.394464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:34.636 [2024-09-28 10:37:09.394474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:34.636 [2024-09-28 10:37:09.394483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:34.636 [2024-09-28 10:37:09.394492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:34.636 [2024-09-28 10:37:09.394502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:34.636 [2024-09-28 10:37:09.394511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:34.636 [2024-09-28 10:37:09.394520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:34.636 [2024-09-28 10:37:09.394530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:34.636 [2024-09-28 10:37:09.394539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:34.636 [2024-09-28 10:37:09.394548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:34.636 [2024-09-28 10:37:09.394558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:34.636 [2024-09-28 10:37:09.394567] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:34.636 [2024-09-28 10:37:09.394576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:34.636 [2024-09-28 10:37:09.394583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:34.636 [2024-09-28 10:37:09.394592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:34.636 [2024-09-28 10:37:09.394600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:34.636 [2024-09-28 10:37:09.394606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:34.636 [2024-09-28 10:37:09.394613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:34.636 [2024-09-28 10:37:09.394619] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:34.636 [2024-09-28 10:37:09.394632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:34.636 [2024-09-28 10:37:09.394640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:34.636 [2024-09-28 10:37:09.394647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:34.636 [2024-09-28 10:37:09.394655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:34.636 [2024-09-28 10:37:09.394662] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:34.636 [2024-09-28 10:37:09.394669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:34.636 [2024-09-28 10:37:09.394676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:34.636 [2024-09-28 10:37:09.394683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:34.636 [2024-09-28 10:37:09.394691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:34.636 [2024-09-28 10:37:09.394700] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:34.636 [2024-09-28 10:37:09.394713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:34.636 [2024-09-28 10:37:09.394724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:34.636 [2024-09-28 10:37:09.394732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:34.636 [2024-09-28 10:37:09.394738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:34.636 [2024-09-28 10:37:09.394746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:34.636 [2024-09-28 10:37:09.394754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:34.636 [2024-09-28 10:37:09.394762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:34.636 [2024-09-28 10:37:09.394769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:34.636 [2024-09-28 10:37:09.394776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:34.636 [2024-09-28 10:37:09.394784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:34.636 [2024-09-28 10:37:09.394792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:34.636 [2024-09-28 10:37:09.394799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:34.636 [2024-09-28 10:37:09.394806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:34.636 [2024-09-28 10:37:09.394814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:34.636 [2024-09-28 10:37:09.394822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:34.636 [2024-09-28 10:37:09.394830] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:34.636 [2024-09-28 10:37:09.394842] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:34.636 [2024-09-28 10:37:09.394853] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:34.636 [2024-09-28 10:37:09.394860] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:34.636 [2024-09-28 10:37:09.394868] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:34.636 [2024-09-28 10:37:09.394875] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:34.636 [2024-09-28 10:37:09.394883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.636 [2024-09-28 10:37:09.394894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:34.636 [2024-09-28 10:37:09.394902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.495 ms 00:16:34.636 [2024-09-28 10:37:09.394909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.899 [2024-09-28 10:37:09.419843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.899 [2024-09-28 10:37:09.419930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:34.900 [2024-09-28 10:37:09.419954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.820 ms 00:16:34.900 [2024-09-28 10:37:09.420012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.900 [2024-09-28 10:37:09.420279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.900 [2024-09-28 10:37:09.420314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:34.900 [2024-09-28 10:37:09.420342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:16:34.900 [2024-09-28 10:37:09.420356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.900 [2024-09-28 10:37:09.434856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.900 [2024-09-28 10:37:09.434912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:34.900 [2024-09-28 10:37:09.434923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.460 ms 00:16:34.900 [2024-09-28 10:37:09.434931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.900 [2024-09-28 10:37:09.435037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.900 [2024-09-28 10:37:09.435051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:34.900 [2024-09-28 10:37:09.435060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:34.900 [2024-09-28 10:37:09.435068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.900 [2024-09-28 10:37:09.435681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.900 [2024-09-28 10:37:09.435725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:34.900 [2024-09-28 10:37:09.435737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.588 ms 00:16:34.900 [2024-09-28 10:37:09.435747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.900 [2024-09-28 10:37:09.435925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:16:34.900 [2024-09-28 10:37:09.435939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:34.900 [2024-09-28 10:37:09.435951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:16:34.900 [2024-09-28 10:37:09.435976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.900 [2024-09-28 10:37:09.444666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.900 [2024-09-28 10:37:09.444716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:34.900 [2024-09-28 10:37:09.444727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.664 ms 00:16:34.900 [2024-09-28 10:37:09.444742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.900 [2024-09-28 10:37:09.449071] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:16:34.900 [2024-09-28 10:37:09.449121] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:34.900 [2024-09-28 10:37:09.449133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.900 [2024-09-28 10:37:09.449142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:34.900 [2024-09-28 10:37:09.449151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.282 ms 00:16:34.900 [2024-09-28 10:37:09.449159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.900 [2024-09-28 10:37:09.465608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.900 [2024-09-28 10:37:09.465656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:34.900 [2024-09-28 10:37:09.465669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.356 ms 00:16:34.900 [2024-09-28 10:37:09.465678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.900 [2024-09-28 10:37:09.469077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.900 [2024-09-28 10:37:09.469124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:34.900 [2024-09-28 10:37:09.469134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.267 ms 00:16:34.900 [2024-09-28 10:37:09.469141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.900 [2024-09-28 10:37:09.472168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.900 [2024-09-28 10:37:09.472376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:34.900 [2024-09-28 10:37:09.472397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.968 ms 00:16:34.900 [2024-09-28 10:37:09.472404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.900 [2024-09-28 10:37:09.472758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.900 [2024-09-28 10:37:09.472775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:34.900 [2024-09-28 10:37:09.472785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:16:34.900 [2024-09-28 10:37:09.472793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.900 [2024-09-28 10:37:09.497870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.900 [2024-09-28 10:37:09.497929] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:34.900 [2024-09-28 10:37:09.497943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.052 ms 00:16:34.900 [2024-09-28 10:37:09.497951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.900 [2024-09-28 10:37:09.506023] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:34.900 [2024-09-28 10:37:09.526113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.900 [2024-09-28 10:37:09.526163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:34.900 [2024-09-28 10:37:09.526176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.038 ms 00:16:34.900 [2024-09-28 10:37:09.526185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.900 [2024-09-28 10:37:09.526282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.900 [2024-09-28 10:37:09.526293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:34.900 [2024-09-28 10:37:09.526302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:16:34.900 [2024-09-28 10:37:09.526313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.900 [2024-09-28 10:37:09.526371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.900 [2024-09-28 10:37:09.526380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:34.900 [2024-09-28 10:37:09.526389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:16:34.900 [2024-09-28 10:37:09.526397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.900 [2024-09-28 10:37:09.526421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.900 [2024-09-28 10:37:09.526430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:34.900 [2024-09-28 10:37:09.526439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:34.900 [2024-09-28 10:37:09.526447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.900 [2024-09-28 10:37:09.526488] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:34.900 [2024-09-28 10:37:09.526504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.900 [2024-09-28 10:37:09.526512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:34.900 [2024-09-28 10:37:09.526521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:16:34.900 [2024-09-28 10:37:09.526530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.900 [2024-09-28 10:37:09.532658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.900 [2024-09-28 10:37:09.532721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:34.900 [2024-09-28 10:37:09.532732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.106 ms 00:16:34.900 [2024-09-28 10:37:09.532741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.900 [2024-09-28 10:37:09.532842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.900 [2024-09-28 10:37:09.532853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:34.900 [2024-09-28 10:37:09.532863] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:16:34.900 [2024-09-28 10:37:09.532875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.900 [2024-09-28 10:37:09.533926] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:34.900 [2024-09-28 10:37:09.535397] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 164.320 ms, result 0 00:16:34.900 [2024-09-28 10:37:09.536426] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:34.900 [2024-09-28 10:37:09.544037] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:51.466  Copying: 17/256 [MB] (17 MBps) Copying: 29/256 [MB] (12 MBps) Copying: 47/256 [MB] (17 MBps) Copying: 66/256 [MB] (19 MBps) Copying: 86/256 [MB] (19 MBps) Copying: 108/256 [MB] (22 MBps) Copying: 129/256 [MB] (21 MBps) Copying: 153/256 [MB] (23 MBps) Copying: 166/256 [MB] (12 MBps) Copying: 177/256 [MB] (11 MBps) Copying: 188/256 [MB] (10 MBps) Copying: 198/256 [MB] (10 MBps) Copying: 211/256 [MB] (12 MBps) Copying: 221/256 [MB] (10 MBps) Copying: 232/256 [MB] (10 MBps) Copying: 249/256 [MB] (17 MBps) Copying: 256/256 [MB] (average 15 MBps)[2024-09-28 10:37:26.099323] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:51.466 [2024-09-28 10:37:26.102082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.466 [2024-09-28 10:37:26.102133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:51.466 [2024-09-28 10:37:26.102153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:51.466 [2024-09-28 10:37:26.102164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.466 [2024-09-28 10:37:26.102192] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:51.466 [2024-09-28 10:37:26.103491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.466 [2024-09-28 10:37:26.103682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:51.466 [2024-09-28 10:37:26.103906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.276 ms 00:16:51.466 [2024-09-28 10:37:26.103977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.466 [2024-09-28 10:37:26.104822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.466 [2024-09-28 10:37:26.104861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:51.466 [2024-09-28 10:37:26.104875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 00:16:51.466 [2024-09-28 10:37:26.104885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.466 [2024-09-28 10:37:26.109019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.466 [2024-09-28 10:37:26.109040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:51.466 [2024-09-28 10:37:26.109052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.105 ms 00:16:51.466 [2024-09-28 10:37:26.109060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.466 [2024-09-28 10:37:26.116415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:16:51.466 [2024-09-28 10:37:26.116452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:51.466 [2024-09-28 10:37:26.116463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.327 ms 00:16:51.466 [2024-09-28 10:37:26.116471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.466 [2024-09-28 10:37:26.119696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.466 [2024-09-28 10:37:26.119906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:51.466 [2024-09-28 10:37:26.119927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.137 ms 00:16:51.466 [2024-09-28 10:37:26.119937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.466 [2024-09-28 10:37:26.125830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.466 [2024-09-28 10:37:26.126040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:51.466 [2024-09-28 10:37:26.126262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.702 ms 00:16:51.466 [2024-09-28 10:37:26.126318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.466 [2024-09-28 10:37:26.126580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.466 [2024-09-28 10:37:26.126651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:51.466 [2024-09-28 10:37:26.126759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:16:51.466 [2024-09-28 10:37:26.126783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.466 [2024-09-28 10:37:26.130578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.466 [2024-09-28 10:37:26.130752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:51.466 [2024-09-28 10:37:26.130809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.751 ms 00:16:51.466 [2024-09-28 10:37:26.130830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.466 [2024-09-28 10:37:26.134037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.466 [2024-09-28 10:37:26.134193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:51.466 [2024-09-28 10:37:26.134248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.154 ms 00:16:51.466 [2024-09-28 10:37:26.134271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.466 [2024-09-28 10:37:26.136939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.466 [2024-09-28 10:37:26.137110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:51.466 [2024-09-28 10:37:26.137165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.589 ms 00:16:51.466 [2024-09-28 10:37:26.137186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.466 [2024-09-28 10:37:26.139631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.466 [2024-09-28 10:37:26.139832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:51.466 [2024-09-28 10:37:26.139887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.341 ms 00:16:51.466 [2024-09-28 10:37:26.139918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.466 [2024-09-28 
10:37:26.140051] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:51.466 [2024-09-28 10:37:26.140088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:51.466 [2024-09-28 10:37:26.140165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:51.466 [2024-09-28 10:37:26.140198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:51.466 [2024-09-28 10:37:26.140227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:51.466 [2024-09-28 10:37:26.140281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:51.466 [2024-09-28 10:37:26.140314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:51.466 [2024-09-28 10:37:26.140342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:51.466 [2024-09-28 10:37:26.140370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:51.466 [2024-09-28 10:37:26.140421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:51.466 [2024-09-28 10:37:26.140450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:51.466 [2024-09-28 10:37:26.140478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:51.466 [2024-09-28 10:37:26.140506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:51.466 [2024-09-28 10:37:26.140535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:51.466 [2024-09-28 10:37:26.140563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.140590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.140622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.140651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.140680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.140708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.140806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.140836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.140865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.140893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.140920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 
10:37:26.141243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.141273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.141302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.141330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.141358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.141483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.141517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.141545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.141574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.141602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.141630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.141659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.141687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.141714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.141796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.141826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.141854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.141882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.141910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.141938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.141982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.142011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.142038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.142113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.142171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 
00:16:51.467 [2024-09-28 10:37:26.142203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.142410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.142419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.142428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.142437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.142445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.142453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.142461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.142469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.142477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.142485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.142493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.142501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.142509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.142517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.142525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.142533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.142541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.142549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.142558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.142565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.142573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.142581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.142590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.142597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 
wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.142604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.142611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.142619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.142626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.142633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.142652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.142660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.142667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.142675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.142682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.142690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.142698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.142706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.142714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.142721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.142729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.142736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.142743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.142751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.142758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.142768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.142776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.142783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.142791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.142799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.142806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:51.467 [2024-09-28 10:37:26.142823] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:51.467 [2024-09-28 10:37:26.142832] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1563e554-b486-4792-9f1b-1b87095bfb08 00:16:51.467 [2024-09-28 10:37:26.142841] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:51.467 [2024-09-28 10:37:26.142848] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:51.467 [2024-09-28 10:37:26.142865] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:51.468 [2024-09-28 10:37:26.142874] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:51.468 [2024-09-28 10:37:26.142881] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:51.468 [2024-09-28 10:37:26.142899] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:51.468 [2024-09-28 10:37:26.142907] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:51.468 [2024-09-28 10:37:26.142913] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:51.468 [2024-09-28 10:37:26.142920] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:51.468 [2024-09-28 10:37:26.142932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.468 [2024-09-28 10:37:26.142944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:51.468 [2024-09-28 10:37:26.142954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.882 ms 00:16:51.468 [2024-09-28 10:37:26.142982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.468 [2024-09-28 10:37:26.146208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.468 [2024-09-28 10:37:26.146377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:51.468 [2024-09-28 10:37:26.146395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.198 ms 00:16:51.468 [2024-09-28 10:37:26.146404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.468 [2024-09-28 10:37:26.146569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.468 [2024-09-28 10:37:26.146579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:51.468 [2024-09-28 10:37:26.146588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:16:51.468 [2024-09-28 10:37:26.146596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.468 [2024-09-28 10:37:26.156905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:51.468 [2024-09-28 10:37:26.156940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:51.468 [2024-09-28 10:37:26.156952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:51.468 [2024-09-28 10:37:26.157013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.468 [2024-09-28 10:37:26.157107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:51.468 [2024-09-28 10:37:26.157129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:51.468 [2024-09-28 10:37:26.157139] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:51.468 [2024-09-28 10:37:26.157147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.468 [2024-09-28 10:37:26.157196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:51.468 [2024-09-28 10:37:26.157212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:51.468 [2024-09-28 10:37:26.157221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:51.468 [2024-09-28 10:37:26.157229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.468 [2024-09-28 10:37:26.157252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:51.468 [2024-09-28 10:37:26.157264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:51.468 [2024-09-28 10:37:26.157273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:51.468 [2024-09-28 10:37:26.157282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.468 [2024-09-28 10:37:26.176334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:51.468 [2024-09-28 10:37:26.176390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:51.468 [2024-09-28 10:37:26.176403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:51.468 [2024-09-28 10:37:26.176413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.468 [2024-09-28 10:37:26.191066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:51.468 [2024-09-28 10:37:26.191116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:51.468 [2024-09-28 10:37:26.191128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:51.468 [2024-09-28 10:37:26.191138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.468 [2024-09-28 10:37:26.191204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:51.468 [2024-09-28 10:37:26.191215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:51.468 [2024-09-28 10:37:26.191224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:51.468 [2024-09-28 10:37:26.191234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.468 [2024-09-28 10:37:26.191279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:51.468 [2024-09-28 10:37:26.191290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:51.468 [2024-09-28 10:37:26.191317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:51.468 [2024-09-28 10:37:26.191326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.468 [2024-09-28 10:37:26.191422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:51.468 [2024-09-28 10:37:26.191435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:51.468 [2024-09-28 10:37:26.191444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:51.468 [2024-09-28 10:37:26.191453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.468 [2024-09-28 10:37:26.191489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:51.468 [2024-09-28 10:37:26.191499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize superblock 00:16:51.468 [2024-09-28 10:37:26.191507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:51.468 [2024-09-28 10:37:26.191519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.468 [2024-09-28 10:37:26.191570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:51.468 [2024-09-28 10:37:26.191580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:51.468 [2024-09-28 10:37:26.191594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:51.468 [2024-09-28 10:37:26.191604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.468 [2024-09-28 10:37:26.191664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:51.468 [2024-09-28 10:37:26.191678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:51.468 [2024-09-28 10:37:26.191691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:51.468 [2024-09-28 10:37:26.191702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.468 [2024-09-28 10:37:26.191889] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 89.785 ms, result 0 00:16:51.727 00:16:51.727 00:16:51.988 10:37:26 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:16:52.561 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:16:52.561 10:37:27 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:16:52.561 10:37:27 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:16:52.561 10:37:27 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:16:52.561 10:37:27 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:52.561 10:37:27 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:16:52.561 10:37:27 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:16:52.561 10:37:27 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 86751 00:16:52.561 10:37:27 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 86751 ']' 00:16:52.561 10:37:27 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 86751 00:16:52.561 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (86751) - No such process 00:16:52.561 Process with pid 86751 is not found 00:16:52.561 10:37:27 ftl.ftl_trim -- common/autotest_common.sh@977 -- # echo 'Process with pid 86751 is not found' 00:16:52.561 00:16:52.561 real 1m4.797s 00:16:52.561 user 1m27.612s 00:16:52.561 sys 0m4.900s 00:16:52.561 10:37:27 ftl.ftl_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:52.561 10:37:27 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:16:52.561 ************************************ 00:16:52.561 END TEST ftl_trim 00:16:52.561 ************************************ 00:16:52.561 10:37:27 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:16:52.561 10:37:27 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:52.561 10:37:27 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:52.561 10:37:27 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:52.561 ************************************ 00:16:52.561 START TEST ftl_restore 00:16:52.561 
************************************ 00:16:52.561 10:37:27 ftl.ftl_restore -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:16:52.561 * Looking for test storage... 00:16:52.561 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:52.561 10:37:27 ftl.ftl_restore -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:52.561 10:37:27 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:52.561 10:37:27 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lcov --version 00:16:52.823 10:37:27 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:52.823 10:37:27 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:52.823 10:37:27 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:52.823 10:37:27 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:52.823 10:37:27 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:16:52.823 10:37:27 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:16:52.823 10:37:27 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:16:52.823 10:37:27 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:16:52.823 10:37:27 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:16:52.823 10:37:27 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:16:52.823 10:37:27 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:16:52.823 10:37:27 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:52.823 10:37:27 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:16:52.823 10:37:27 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:16:52.823 10:37:27 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:52.823 10:37:27 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:52.823 10:37:27 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:16:52.823 10:37:27 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:16:52.823 10:37:27 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:52.823 10:37:27 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:16:52.823 10:37:27 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:16:52.823 10:37:27 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:16:52.823 10:37:27 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:16:52.823 10:37:27 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:52.823 10:37:27 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:16:52.823 10:37:27 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:16:52.823 10:37:27 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:52.823 10:37:27 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:52.823 10:37:27 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:16:52.823 10:37:27 ftl.ftl_restore -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:52.823 10:37:27 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:52.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.823 --rc genhtml_branch_coverage=1 00:16:52.823 --rc genhtml_function_coverage=1 00:16:52.823 --rc genhtml_legend=1 00:16:52.823 --rc geninfo_all_blocks=1 00:16:52.823 --rc geninfo_unexecuted_blocks=1 00:16:52.823 00:16:52.823 ' 00:16:52.823 10:37:27 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:52.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.823 --rc genhtml_branch_coverage=1 00:16:52.823 --rc genhtml_function_coverage=1 00:16:52.823 --rc genhtml_legend=1 00:16:52.823 --rc geninfo_all_blocks=1 00:16:52.823 --rc geninfo_unexecuted_blocks=1 00:16:52.823 00:16:52.823 ' 00:16:52.823 10:37:27 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:52.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.823 --rc genhtml_branch_coverage=1 00:16:52.823 --rc genhtml_function_coverage=1 00:16:52.823 --rc genhtml_legend=1 00:16:52.823 --rc geninfo_all_blocks=1 00:16:52.823 --rc geninfo_unexecuted_blocks=1 00:16:52.823 00:16:52.823 ' 00:16:52.823 10:37:27 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:52.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.823 --rc genhtml_branch_coverage=1 00:16:52.823 --rc genhtml_function_coverage=1 00:16:52.823 --rc genhtml_legend=1 00:16:52.823 --rc geninfo_all_blocks=1 00:16:52.823 --rc geninfo_unexecuted_blocks=1 00:16:52.823 00:16:52.823 ' 00:16:52.823 10:37:27 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:52.823 10:37:27 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:16:52.823 10:37:27 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:52.823 10:37:27 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:52.823 10:37:27 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:16:52.823 10:37:27 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:52.823 10:37:27 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:52.823 10:37:27 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:52.823 10:37:27 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:52.823 10:37:27 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:52.823 10:37:27 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:52.823 10:37:27 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:52.823 10:37:27 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:52.823 10:37:27 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:52.823 10:37:27 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:52.823 10:37:27 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:52.823 10:37:27 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:52.823 10:37:27 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:52.823 10:37:27 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:52.823 10:37:27 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:52.823 10:37:27 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:52.823 10:37:27 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:52.823 10:37:27 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:52.823 10:37:27 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:52.823 10:37:27 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:52.823 10:37:27 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:52.823 10:37:27 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:52.823 10:37:27 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:52.823 10:37:27 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:52.823 10:37:27 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:52.823 10:37:27 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:16:52.823 10:37:27 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.oreTn92g08 00:16:52.823 10:37:27 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:16:52.823 10:37:27 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:16:52.823 10:37:27 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:16:52.823 10:37:27 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:16:52.823 10:37:27 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:16:52.823 10:37:27 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:16:52.823 10:37:27 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:16:52.823 10:37:27 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:16:52.823 
10:37:27 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=87050 00:16:52.823 10:37:27 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 87050 00:16:52.823 10:37:27 ftl.ftl_restore -- common/autotest_common.sh@831 -- # '[' -z 87050 ']' 00:16:52.823 10:37:27 ftl.ftl_restore -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:52.823 10:37:27 ftl.ftl_restore -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:52.823 10:37:27 ftl.ftl_restore -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:52.823 10:37:27 ftl.ftl_restore -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:52.823 10:37:27 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:16:52.823 10:37:27 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:52.823 [2024-09-28 10:37:27.503456] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:16:52.823 [2024-09-28 10:37:27.503583] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87050 ] 00:16:53.085 [2024-09-28 10:37:27.632931] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:53.085 [2024-09-28 10:37:27.655349] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.085 [2024-09-28 10:37:27.701059] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.656 10:37:28 ftl.ftl_restore -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:53.656 10:37:28 ftl.ftl_restore -- common/autotest_common.sh@864 -- # return 0 00:16:53.656 10:37:28 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:53.656 10:37:28 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:16:53.656 10:37:28 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:53.656 10:37:28 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:16:53.657 10:37:28 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:16:53.657 10:37:28 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:53.917 10:37:28 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:53.917 10:37:28 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:16:53.917 10:37:28 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:53.917 10:37:28 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:16:53.917 10:37:28 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:53.917 10:37:28 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:16:53.917 10:37:28 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:16:53.917 10:37:28 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:54.178 10:37:28 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:54.178 { 00:16:54.178 "name": "nvme0n1", 00:16:54.178 "aliases": [ 00:16:54.178 
"b99a4f71-215e-4774-a557-2807d899741e" 00:16:54.178 ], 00:16:54.178 "product_name": "NVMe disk", 00:16:54.178 "block_size": 4096, 00:16:54.178 "num_blocks": 1310720, 00:16:54.178 "uuid": "b99a4f71-215e-4774-a557-2807d899741e", 00:16:54.178 "numa_id": -1, 00:16:54.178 "assigned_rate_limits": { 00:16:54.178 "rw_ios_per_sec": 0, 00:16:54.178 "rw_mbytes_per_sec": 0, 00:16:54.178 "r_mbytes_per_sec": 0, 00:16:54.178 "w_mbytes_per_sec": 0 00:16:54.178 }, 00:16:54.178 "claimed": true, 00:16:54.178 "claim_type": "read_many_write_one", 00:16:54.178 "zoned": false, 00:16:54.178 "supported_io_types": { 00:16:54.178 "read": true, 00:16:54.178 "write": true, 00:16:54.178 "unmap": true, 00:16:54.178 "flush": true, 00:16:54.178 "reset": true, 00:16:54.178 "nvme_admin": true, 00:16:54.178 "nvme_io": true, 00:16:54.178 "nvme_io_md": false, 00:16:54.178 "write_zeroes": true, 00:16:54.178 "zcopy": false, 00:16:54.178 "get_zone_info": false, 00:16:54.178 "zone_management": false, 00:16:54.178 "zone_append": false, 00:16:54.178 "compare": true, 00:16:54.178 "compare_and_write": false, 00:16:54.178 "abort": true, 00:16:54.178 "seek_hole": false, 00:16:54.178 "seek_data": false, 00:16:54.178 "copy": true, 00:16:54.178 "nvme_iov_md": false 00:16:54.178 }, 00:16:54.178 "driver_specific": { 00:16:54.178 "nvme": [ 00:16:54.178 { 00:16:54.178 "pci_address": "0000:00:11.0", 00:16:54.178 "trid": { 00:16:54.178 "trtype": "PCIe", 00:16:54.178 "traddr": "0000:00:11.0" 00:16:54.178 }, 00:16:54.178 "ctrlr_data": { 00:16:54.178 "cntlid": 0, 00:16:54.178 "vendor_id": "0x1b36", 00:16:54.178 "model_number": "QEMU NVMe Ctrl", 00:16:54.178 "serial_number": "12341", 00:16:54.178 "firmware_revision": "8.0.0", 00:16:54.178 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:54.178 "oacs": { 00:16:54.178 "security": 0, 00:16:54.178 "format": 1, 00:16:54.178 "firmware": 0, 00:16:54.178 "ns_manage": 1 00:16:54.178 }, 00:16:54.178 "multi_ctrlr": false, 00:16:54.178 "ana_reporting": false 00:16:54.178 }, 00:16:54.178 "vs": { 00:16:54.178 "nvme_version": "1.4" 00:16:54.178 }, 00:16:54.178 "ns_data": { 00:16:54.178 "id": 1, 00:16:54.178 "can_share": false 00:16:54.178 } 00:16:54.178 } 00:16:54.178 ], 00:16:54.178 "mp_policy": "active_passive" 00:16:54.178 } 00:16:54.178 } 00:16:54.178 ]' 00:16:54.178 10:37:28 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:54.178 10:37:28 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:16:54.178 10:37:28 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:54.178 10:37:28 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:16:54.178 10:37:28 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:16:54.178 10:37:28 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:16:54.178 10:37:28 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:16:54.178 10:37:28 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:54.178 10:37:28 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:16:54.178 10:37:28 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:54.178 10:37:28 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:54.440 10:37:29 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=a9484171-b294-4ed4-a768-f4d5c2ae4f8c 00:16:54.440 10:37:29 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:16:54.440 10:37:29 ftl.ftl_restore -- ftl/common.sh@30 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a9484171-b294-4ed4-a768-f4d5c2ae4f8c 00:16:54.701 10:37:29 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:54.961 10:37:29 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=1a4d33cd-53c4-458f-b073-158522f05e8c 00:16:54.961 10:37:29 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 1a4d33cd-53c4-458f-b073-158522f05e8c 00:16:55.222 10:37:29 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=1aa47607-38f3-4299-afcd-5bddcca969a9 00:16:55.222 10:37:29 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:16:55.222 10:37:29 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 1aa47607-38f3-4299-afcd-5bddcca969a9 00:16:55.222 10:37:29 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:16:55.222 10:37:29 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:55.222 10:37:29 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=1aa47607-38f3-4299-afcd-5bddcca969a9 00:16:55.222 10:37:29 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:16:55.222 10:37:29 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 1aa47607-38f3-4299-afcd-5bddcca969a9 00:16:55.222 10:37:29 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=1aa47607-38f3-4299-afcd-5bddcca969a9 00:16:55.222 10:37:29 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:55.222 10:37:29 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:16:55.222 10:37:29 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:16:55.222 10:37:29 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1aa47607-38f3-4299-afcd-5bddcca969a9 00:16:55.222 10:37:29 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:55.222 { 00:16:55.222 "name": "1aa47607-38f3-4299-afcd-5bddcca969a9", 00:16:55.222 "aliases": [ 00:16:55.222 "lvs/nvme0n1p0" 00:16:55.222 ], 00:16:55.222 "product_name": "Logical Volume", 00:16:55.222 "block_size": 4096, 00:16:55.222 "num_blocks": 26476544, 00:16:55.222 "uuid": "1aa47607-38f3-4299-afcd-5bddcca969a9", 00:16:55.222 "assigned_rate_limits": { 00:16:55.222 "rw_ios_per_sec": 0, 00:16:55.222 "rw_mbytes_per_sec": 0, 00:16:55.222 "r_mbytes_per_sec": 0, 00:16:55.222 "w_mbytes_per_sec": 0 00:16:55.222 }, 00:16:55.222 "claimed": false, 00:16:55.222 "zoned": false, 00:16:55.222 "supported_io_types": { 00:16:55.222 "read": true, 00:16:55.222 "write": true, 00:16:55.222 "unmap": true, 00:16:55.222 "flush": false, 00:16:55.222 "reset": true, 00:16:55.222 "nvme_admin": false, 00:16:55.222 "nvme_io": false, 00:16:55.222 "nvme_io_md": false, 00:16:55.222 "write_zeroes": true, 00:16:55.222 "zcopy": false, 00:16:55.222 "get_zone_info": false, 00:16:55.222 "zone_management": false, 00:16:55.222 "zone_append": false, 00:16:55.222 "compare": false, 00:16:55.222 "compare_and_write": false, 00:16:55.222 "abort": false, 00:16:55.222 "seek_hole": true, 00:16:55.222 "seek_data": true, 00:16:55.222 "copy": false, 00:16:55.222 "nvme_iov_md": false 00:16:55.222 }, 00:16:55.222 "driver_specific": { 00:16:55.222 "lvol": { 00:16:55.222 "lvol_store_uuid": "1a4d33cd-53c4-458f-b073-158522f05e8c", 00:16:55.222 "base_bdev": "nvme0n1", 00:16:55.222 "thin_provision": true, 00:16:55.222 "num_allocated_clusters": 0, 
00:16:55.222 "snapshot": false, 00:16:55.222 "clone": false, 00:16:55.222 "esnap_clone": false 00:16:55.222 } 00:16:55.222 } 00:16:55.222 } 00:16:55.222 ]' 00:16:55.482 10:37:29 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:55.482 10:37:30 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:16:55.482 10:37:30 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:55.482 10:37:30 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:55.482 10:37:30 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:55.482 10:37:30 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:16:55.482 10:37:30 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:16:55.482 10:37:30 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:16:55.482 10:37:30 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:55.744 10:37:30 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:55.744 10:37:30 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:55.744 10:37:30 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 1aa47607-38f3-4299-afcd-5bddcca969a9 00:16:55.744 10:37:30 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=1aa47607-38f3-4299-afcd-5bddcca969a9 00:16:55.744 10:37:30 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:55.744 10:37:30 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:16:55.744 10:37:30 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:16:55.744 10:37:30 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1aa47607-38f3-4299-afcd-5bddcca969a9 00:16:56.005 10:37:30 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:56.005 { 00:16:56.005 "name": "1aa47607-38f3-4299-afcd-5bddcca969a9", 00:16:56.005 "aliases": [ 00:16:56.005 "lvs/nvme0n1p0" 00:16:56.005 ], 00:16:56.005 "product_name": "Logical Volume", 00:16:56.005 "block_size": 4096, 00:16:56.005 "num_blocks": 26476544, 00:16:56.005 "uuid": "1aa47607-38f3-4299-afcd-5bddcca969a9", 00:16:56.005 "assigned_rate_limits": { 00:16:56.005 "rw_ios_per_sec": 0, 00:16:56.005 "rw_mbytes_per_sec": 0, 00:16:56.005 "r_mbytes_per_sec": 0, 00:16:56.005 "w_mbytes_per_sec": 0 00:16:56.005 }, 00:16:56.005 "claimed": false, 00:16:56.005 "zoned": false, 00:16:56.005 "supported_io_types": { 00:16:56.005 "read": true, 00:16:56.005 "write": true, 00:16:56.005 "unmap": true, 00:16:56.005 "flush": false, 00:16:56.005 "reset": true, 00:16:56.005 "nvme_admin": false, 00:16:56.005 "nvme_io": false, 00:16:56.005 "nvme_io_md": false, 00:16:56.005 "write_zeroes": true, 00:16:56.005 "zcopy": false, 00:16:56.005 "get_zone_info": false, 00:16:56.005 "zone_management": false, 00:16:56.005 "zone_append": false, 00:16:56.005 "compare": false, 00:16:56.005 "compare_and_write": false, 00:16:56.005 "abort": false, 00:16:56.005 "seek_hole": true, 00:16:56.005 "seek_data": true, 00:16:56.005 "copy": false, 00:16:56.005 "nvme_iov_md": false 00:16:56.005 }, 00:16:56.005 "driver_specific": { 00:16:56.005 "lvol": { 00:16:56.005 "lvol_store_uuid": "1a4d33cd-53c4-458f-b073-158522f05e8c", 00:16:56.005 "base_bdev": "nvme0n1", 00:16:56.005 "thin_provision": true, 00:16:56.005 "num_allocated_clusters": 0, 00:16:56.005 "snapshot": false, 00:16:56.005 "clone": false, 
00:16:56.005 "esnap_clone": false 00:16:56.005 } 00:16:56.005 } 00:16:56.005 } 00:16:56.005 ]' 00:16:56.005 10:37:30 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:56.005 10:37:30 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:16:56.005 10:37:30 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:56.005 10:37:30 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:56.005 10:37:30 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:56.005 10:37:30 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:16:56.005 10:37:30 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:16:56.005 10:37:30 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:56.266 10:37:30 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:16:56.266 10:37:30 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 1aa47607-38f3-4299-afcd-5bddcca969a9 00:16:56.266 10:37:30 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=1aa47607-38f3-4299-afcd-5bddcca969a9 00:16:56.266 10:37:30 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:56.266 10:37:30 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:16:56.266 10:37:30 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:16:56.266 10:37:30 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1aa47607-38f3-4299-afcd-5bddcca969a9 00:16:56.528 10:37:31 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:56.528 { 00:16:56.528 "name": "1aa47607-38f3-4299-afcd-5bddcca969a9", 00:16:56.528 "aliases": [ 00:16:56.528 "lvs/nvme0n1p0" 00:16:56.528 ], 00:16:56.528 "product_name": "Logical Volume", 00:16:56.528 "block_size": 4096, 00:16:56.528 "num_blocks": 26476544, 00:16:56.528 "uuid": "1aa47607-38f3-4299-afcd-5bddcca969a9", 00:16:56.528 "assigned_rate_limits": { 00:16:56.528 "rw_ios_per_sec": 0, 00:16:56.528 "rw_mbytes_per_sec": 0, 00:16:56.528 "r_mbytes_per_sec": 0, 00:16:56.528 "w_mbytes_per_sec": 0 00:16:56.528 }, 00:16:56.528 "claimed": false, 00:16:56.528 "zoned": false, 00:16:56.528 "supported_io_types": { 00:16:56.528 "read": true, 00:16:56.528 "write": true, 00:16:56.528 "unmap": true, 00:16:56.528 "flush": false, 00:16:56.528 "reset": true, 00:16:56.528 "nvme_admin": false, 00:16:56.528 "nvme_io": false, 00:16:56.528 "nvme_io_md": false, 00:16:56.528 "write_zeroes": true, 00:16:56.528 "zcopy": false, 00:16:56.528 "get_zone_info": false, 00:16:56.528 "zone_management": false, 00:16:56.528 "zone_append": false, 00:16:56.528 "compare": false, 00:16:56.528 "compare_and_write": false, 00:16:56.528 "abort": false, 00:16:56.528 "seek_hole": true, 00:16:56.528 "seek_data": true, 00:16:56.528 "copy": false, 00:16:56.528 "nvme_iov_md": false 00:16:56.528 }, 00:16:56.528 "driver_specific": { 00:16:56.528 "lvol": { 00:16:56.528 "lvol_store_uuid": "1a4d33cd-53c4-458f-b073-158522f05e8c", 00:16:56.528 "base_bdev": "nvme0n1", 00:16:56.528 "thin_provision": true, 00:16:56.528 "num_allocated_clusters": 0, 00:16:56.528 "snapshot": false, 00:16:56.528 "clone": false, 00:16:56.528 "esnap_clone": false 00:16:56.528 } 00:16:56.528 } 00:16:56.528 } 00:16:56.528 ]' 00:16:56.528 10:37:31 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:56.528 10:37:31 ftl.ftl_restore -- 
common/autotest_common.sh@1383 -- # bs=4096 00:16:56.528 10:37:31 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:56.528 10:37:31 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:56.528 10:37:31 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:56.528 10:37:31 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:16:56.528 10:37:31 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:16:56.528 10:37:31 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 1aa47607-38f3-4299-afcd-5bddcca969a9 --l2p_dram_limit 10' 00:16:56.528 10:37:31 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:16:56.528 10:37:31 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:16:56.528 10:37:31 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:16:56.528 10:37:31 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:16:56.528 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:16:56.528 10:37:31 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 1aa47607-38f3-4299-afcd-5bddcca969a9 --l2p_dram_limit 10 -c nvc0n1p0 00:16:56.528 [2024-09-28 10:37:31.288697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.528 [2024-09-28 10:37:31.288742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:56.528 [2024-09-28 10:37:31.288756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:56.528 [2024-09-28 10:37:31.288763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.528 [2024-09-28 10:37:31.288805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.528 [2024-09-28 10:37:31.288817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:56.528 [2024-09-28 10:37:31.288826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:16:56.528 [2024-09-28 10:37:31.288834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.528 [2024-09-28 10:37:31.288854] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:56.528 [2024-09-28 10:37:31.289058] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:56.528 [2024-09-28 10:37:31.289077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.528 [2024-09-28 10:37:31.289086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:56.528 [2024-09-28 10:37:31.289097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.230 ms 00:16:56.528 [2024-09-28 10:37:31.289107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.528 [2024-09-28 10:37:31.289133] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID e5266b08-7571-4e39-9a75-9919ffbc7afd 00:16:56.528 [2024-09-28 10:37:31.290434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.528 [2024-09-28 10:37:31.290576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:56.528 [2024-09-28 10:37:31.290590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:16:56.528 [2024-09-28 10:37:31.290600] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.528 [2024-09-28 10:37:31.297639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.528 [2024-09-28 10:37:31.297755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:56.528 [2024-09-28 10:37:31.297768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.992 ms 00:16:56.528 [2024-09-28 10:37:31.297778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.528 [2024-09-28 10:37:31.297880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.528 [2024-09-28 10:37:31.297892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:56.528 [2024-09-28 10:37:31.297899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:16:56.528 [2024-09-28 10:37:31.297907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.528 [2024-09-28 10:37:31.297941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.528 [2024-09-28 10:37:31.297950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:56.528 [2024-09-28 10:37:31.297956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:56.528 [2024-09-28 10:37:31.297979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.528 [2024-09-28 10:37:31.298001] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:56.528 [2024-09-28 10:37:31.299678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.528 [2024-09-28 10:37:31.299704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:56.528 [2024-09-28 10:37:31.299713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.680 ms 00:16:56.528 [2024-09-28 10:37:31.299722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.528 [2024-09-28 10:37:31.299751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.528 [2024-09-28 10:37:31.299758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:56.528 [2024-09-28 10:37:31.299768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:56.528 [2024-09-28 10:37:31.299774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.528 [2024-09-28 10:37:31.299794] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:56.528 [2024-09-28 10:37:31.299908] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:56.528 [2024-09-28 10:37:31.299920] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:56.528 [2024-09-28 10:37:31.299929] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:56.528 [2024-09-28 10:37:31.299942] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:56.528 [2024-09-28 10:37:31.299955] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:56.528 [2024-09-28 10:37:31.299979] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:56.528 [2024-09-28 10:37:31.299986] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 
4 00:16:56.528 [2024-09-28 10:37:31.299995] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:56.528 [2024-09-28 10:37:31.300001] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:56.528 [2024-09-28 10:37:31.300009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.528 [2024-09-28 10:37:31.300018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:56.528 [2024-09-28 10:37:31.300026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:16:56.528 [2024-09-28 10:37:31.300032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.528 [2024-09-28 10:37:31.300101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.528 [2024-09-28 10:37:31.300108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:56.528 [2024-09-28 10:37:31.300116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:16:56.529 [2024-09-28 10:37:31.300121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.529 [2024-09-28 10:37:31.300198] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:56.529 [2024-09-28 10:37:31.300207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:56.529 [2024-09-28 10:37:31.300215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:56.529 [2024-09-28 10:37:31.300221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:56.529 [2024-09-28 10:37:31.300229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:56.529 [2024-09-28 10:37:31.300235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:56.529 [2024-09-28 10:37:31.300243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:56.529 [2024-09-28 10:37:31.300249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:56.529 [2024-09-28 10:37:31.300256] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:56.529 [2024-09-28 10:37:31.300261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:56.529 [2024-09-28 10:37:31.300268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:56.529 [2024-09-28 10:37:31.300273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:56.529 [2024-09-28 10:37:31.300281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:56.529 [2024-09-28 10:37:31.300286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:56.529 [2024-09-28 10:37:31.300293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:56.529 [2024-09-28 10:37:31.300298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:56.529 [2024-09-28 10:37:31.300304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:56.529 [2024-09-28 10:37:31.300311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:56.529 [2024-09-28 10:37:31.300318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:56.529 [2024-09-28 10:37:31.300324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:56.529 [2024-09-28 10:37:31.300335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:56.529 [2024-09-28 10:37:31.300341] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:56.529 [2024-09-28 10:37:31.300349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:56.529 [2024-09-28 10:37:31.300355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:56.529 [2024-09-28 10:37:31.300362] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:56.529 [2024-09-28 10:37:31.300368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:56.529 [2024-09-28 10:37:31.300376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:56.529 [2024-09-28 10:37:31.300381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:56.529 [2024-09-28 10:37:31.300391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:56.529 [2024-09-28 10:37:31.300397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:56.529 [2024-09-28 10:37:31.300404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:56.529 [2024-09-28 10:37:31.300410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:56.529 [2024-09-28 10:37:31.300418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:16:56.529 [2024-09-28 10:37:31.300425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:56.529 [2024-09-28 10:37:31.300433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:56.529 [2024-09-28 10:37:31.300438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:56.529 [2024-09-28 10:37:31.300446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:56.529 [2024-09-28 10:37:31.300452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:56.529 [2024-09-28 10:37:31.300460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:16:56.529 [2024-09-28 10:37:31.300466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:56.529 [2024-09-28 10:37:31.300473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:56.529 [2024-09-28 10:37:31.300479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:56.529 [2024-09-28 10:37:31.300486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:56.529 [2024-09-28 10:37:31.300491] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:56.529 [2024-09-28 10:37:31.300501] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:56.529 [2024-09-28 10:37:31.300508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:56.529 [2024-09-28 10:37:31.300515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:56.529 [2024-09-28 10:37:31.300521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:56.529 [2024-09-28 10:37:31.300529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:56.529 [2024-09-28 10:37:31.300534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:56.529 [2024-09-28 10:37:31.300542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:56.529 [2024-09-28 10:37:31.300547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:56.529 [2024-09-28 10:37:31.300555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 
00:16:56.529 [2024-09-28 10:37:31.300565] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:56.529 [2024-09-28 10:37:31.300577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:56.529 [2024-09-28 10:37:31.300585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:56.529 [2024-09-28 10:37:31.300594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:56.529 [2024-09-28 10:37:31.300601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:56.529 [2024-09-28 10:37:31.300609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:56.529 [2024-09-28 10:37:31.300615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:56.529 [2024-09-28 10:37:31.300625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:16:56.529 [2024-09-28 10:37:31.300631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:56.529 [2024-09-28 10:37:31.300639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:16:56.529 [2024-09-28 10:37:31.300646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:56.529 [2024-09-28 10:37:31.300654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:16:56.529 [2024-09-28 10:37:31.300660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:56.529 [2024-09-28 10:37:31.300670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:56.529 [2024-09-28 10:37:31.300676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:56.529 [2024-09-28 10:37:31.300685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:56.529 [2024-09-28 10:37:31.300691] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:56.529 [2024-09-28 10:37:31.300701] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:56.529 [2024-09-28 10:37:31.300710] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:56.529 [2024-09-28 10:37:31.300717] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:56.529 [2024-09-28 10:37:31.300723] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:56.529 [2024-09-28 10:37:31.300731] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:56.529 [2024-09-28 10:37:31.300737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.529 [2024-09-28 10:37:31.300746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:56.529 [2024-09-28 10:37:31.300752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.592 ms 00:16:56.529 [2024-09-28 10:37:31.300759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.529 [2024-09-28 10:37:31.300790] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:16:56.529 [2024-09-28 10:37:31.300798] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:00.741 [2024-09-28 10:37:34.753987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.741 [2024-09-28 10:37:34.754141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:00.741 [2024-09-28 10:37:34.754157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3453.187 ms 00:17:00.741 [2024-09-28 10:37:34.754167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.741 [2024-09-28 10:37:34.766100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.741 [2024-09-28 10:37:34.766133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:00.741 [2024-09-28 10:37:34.766142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.853 ms 00:17:00.741 [2024-09-28 10:37:34.766152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.741 [2024-09-28 10:37:34.766222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.741 [2024-09-28 10:37:34.766233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:00.741 [2024-09-28 10:37:34.766240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:17:00.741 [2024-09-28 10:37:34.766247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.741 [2024-09-28 10:37:34.776226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.741 [2024-09-28 10:37:34.776259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:00.741 [2024-09-28 10:37:34.776267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.930 ms 00:17:00.741 [2024-09-28 10:37:34.776281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.741 [2024-09-28 10:37:34.776304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.741 [2024-09-28 10:37:34.776316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:00.741 [2024-09-28 10:37:34.776323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:00.741 [2024-09-28 10:37:34.776331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.741 [2024-09-28 10:37:34.776763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.741 [2024-09-28 10:37:34.776781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:00.741 [2024-09-28 10:37:34.776789] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.397 ms 00:17:00.741 [2024-09-28 10:37:34.776799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.741 [2024-09-28 10:37:34.776881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.741 [2024-09-28 10:37:34.776892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:00.741 [2024-09-28 10:37:34.776900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:17:00.741 [2024-09-28 10:37:34.776909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.741 [2024-09-28 10:37:34.795118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.741 [2024-09-28 10:37:34.795182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:00.741 [2024-09-28 10:37:34.795210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.187 ms 00:17:00.741 [2024-09-28 10:37:34.795228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.741 [2024-09-28 10:37:34.804484] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:00.741 [2024-09-28 10:37:34.807596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.741 [2024-09-28 10:37:34.807622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:00.741 [2024-09-28 10:37:34.807633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.210 ms 00:17:00.741 [2024-09-28 10:37:34.807639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.741 [2024-09-28 10:37:34.875777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.741 [2024-09-28 10:37:34.875807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:00.741 [2024-09-28 10:37:34.875822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.115 ms 00:17:00.741 [2024-09-28 10:37:34.875829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.741 [2024-09-28 10:37:34.875986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.741 [2024-09-28 10:37:34.875995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:00.741 [2024-09-28 10:37:34.876005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:17:00.741 [2024-09-28 10:37:34.876011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.741 [2024-09-28 10:37:34.879582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.741 [2024-09-28 10:37:34.879609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:00.741 [2024-09-28 10:37:34.879619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.545 ms 00:17:00.741 [2024-09-28 10:37:34.879628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.741 [2024-09-28 10:37:34.882728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.741 [2024-09-28 10:37:34.882869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:00.741 [2024-09-28 10:37:34.882885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.070 ms 00:17:00.741 [2024-09-28 10:37:34.882891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.741 [2024-09-28 10:37:34.883134] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:17:00.741 [2024-09-28 10:37:34.883187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:00.741 [2024-09-28 10:37:34.883198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:17:00.741 [2024-09-28 10:37:34.883208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.741 [2024-09-28 10:37:34.914649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.741 [2024-09-28 10:37:34.914678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:00.741 [2024-09-28 10:37:34.914688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.420 ms 00:17:00.741 [2024-09-28 10:37:34.914697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.741 [2024-09-28 10:37:34.919423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.741 [2024-09-28 10:37:34.919448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:00.741 [2024-09-28 10:37:34.919458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.698 ms 00:17:00.741 [2024-09-28 10:37:34.919464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.741 [2024-09-28 10:37:34.922770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.741 [2024-09-28 10:37:34.922795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:00.741 [2024-09-28 10:37:34.922804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.285 ms 00:17:00.741 [2024-09-28 10:37:34.922809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.741 [2024-09-28 10:37:34.927013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.741 [2024-09-28 10:37:34.927044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:00.741 [2024-09-28 10:37:34.927058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.184 ms 00:17:00.741 [2024-09-28 10:37:34.927064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.741 [2024-09-28 10:37:34.927087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.741 [2024-09-28 10:37:34.927095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:00.741 [2024-09-28 10:37:34.927103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:00.741 [2024-09-28 10:37:34.927110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.741 [2024-09-28 10:37:34.927165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.741 [2024-09-28 10:37:34.927172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:00.741 [2024-09-28 10:37:34.927180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:17:00.741 [2024-09-28 10:37:34.927186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.741 [2024-09-28 10:37:34.928019] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3638.947 ms, result 0 00:17:00.741 { 00:17:00.741 "name": "ftl0", 00:17:00.741 "uuid": "e5266b08-7571-4e39-9a75-9919ffbc7afd" 00:17:00.741 } 00:17:00.741 10:37:34 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:17:00.741 10:37:34 ftl.ftl_restore -- ftl/restore.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:00.741 10:37:35 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:17:00.741 10:37:35 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:00.742 [2024-09-28 10:37:35.341271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.742 [2024-09-28 10:37:35.341303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:00.742 [2024-09-28 10:37:35.341312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:00.742 [2024-09-28 10:37:35.341321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.742 [2024-09-28 10:37:35.341339] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:00.742 [2024-09-28 10:37:35.341870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.742 [2024-09-28 10:37:35.341883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:00.742 [2024-09-28 10:37:35.341895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.516 ms 00:17:00.742 [2024-09-28 10:37:35.341901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.742 [2024-09-28 10:37:35.342124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.742 [2024-09-28 10:37:35.342134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:00.742 [2024-09-28 10:37:35.342143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:17:00.742 [2024-09-28 10:37:35.342155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.742 [2024-09-28 10:37:35.344568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.742 [2024-09-28 10:37:35.344584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:00.742 [2024-09-28 10:37:35.344594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.398 ms 00:17:00.742 [2024-09-28 10:37:35.344601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.742 [2024-09-28 10:37:35.349191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.742 [2024-09-28 10:37:35.349210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:00.742 [2024-09-28 10:37:35.349219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.575 ms 00:17:00.742 [2024-09-28 10:37:35.349226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.742 [2024-09-28 10:37:35.351594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.742 [2024-09-28 10:37:35.351619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:00.742 [2024-09-28 10:37:35.351628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.325 ms 00:17:00.742 [2024-09-28 10:37:35.351634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.742 [2024-09-28 10:37:35.356758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.742 [2024-09-28 10:37:35.356785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:00.742 [2024-09-28 10:37:35.356801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.095 ms 00:17:00.742 [2024-09-28 10:37:35.356807] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:17:00.742 [2024-09-28 10:37:35.356899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.742 [2024-09-28 10:37:35.356906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:00.742 [2024-09-28 10:37:35.356918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:17:00.742 [2024-09-28 10:37:35.356926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.742 [2024-09-28 10:37:35.359243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.742 [2024-09-28 10:37:35.359358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:00.742 [2024-09-28 10:37:35.359374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.296 ms 00:17:00.742 [2024-09-28 10:37:35.359380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.742 [2024-09-28 10:37:35.361641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.742 [2024-09-28 10:37:35.361674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:00.742 [2024-09-28 10:37:35.361685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.227 ms 00:17:00.742 [2024-09-28 10:37:35.361690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.742 [2024-09-28 10:37:35.363363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.742 [2024-09-28 10:37:35.363389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:00.742 [2024-09-28 10:37:35.363398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.641 ms 00:17:00.742 [2024-09-28 10:37:35.363404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.742 [2024-09-28 10:37:35.365118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.742 [2024-09-28 10:37:35.365141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:00.742 [2024-09-28 10:37:35.365151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.665 ms 00:17:00.742 [2024-09-28 10:37:35.365157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.742 [2024-09-28 10:37:35.365253] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:00.742 [2024-09-28 10:37:35.365265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:00.742 [2024-09-28 10:37:35.365275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:00.742 [2024-09-28 10:37:35.365281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:00.742 [2024-09-28 10:37:35.365291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:00.742 [2024-09-28 10:37:35.365297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:00.742 [2024-09-28 10:37:35.365304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:00.742 [2024-09-28 10:37:35.365310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:00.742 [2024-09-28 10:37:35.365317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 
00:17:00.742 [2024-09-28 10:37:35.365323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:00.742 [2024-09-28 10:37:35.365331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:00.742 [2024-09-28 10:37:35.365336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:00.742 [2024-09-28 10:37:35.365344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:00.742 [2024-09-28 10:37:35.365350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:00.742 [2024-09-28 10:37:35.365357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:00.742 [2024-09-28 10:37:35.365363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:00.742 [2024-09-28 10:37:35.365370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:00.742 [2024-09-28 10:37:35.365376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:00.742 [2024-09-28 10:37:35.365383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:00.742 [2024-09-28 10:37:35.365389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:00.742 [2024-09-28 10:37:35.365397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:00.742 [2024-09-28 10:37:35.365403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:00.742 [2024-09-28 10:37:35.365411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:00.742 [2024-09-28 10:37:35.365417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:00.742 [2024-09-28 10:37:35.365425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:00.742 [2024-09-28 10:37:35.365430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:00.742 [2024-09-28 10:37:35.365437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:00.742 [2024-09-28 10:37:35.365444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:00.742 [2024-09-28 10:37:35.365451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:00.742 [2024-09-28 10:37:35.365457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:00.742 [2024-09-28 10:37:35.365465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:00.742 [2024-09-28 10:37:35.365470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:00.742 [2024-09-28 10:37:35.365478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:00.742 [2024-09-28 10:37:35.365485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 
wr_cnt: 0 state: free 00:17:00.742 [2024-09-28 10:37:35.365492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:00.742 [2024-09-28 10:37:35.365498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:00.742 [2024-09-28 10:37:35.365507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:00.742 [2024-09-28 10:37:35.365513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:00.742 [2024-09-28 10:37:35.365520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:00.742 [2024-09-28 10:37:35.365526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:00.742 [2024-09-28 10:37:35.365533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:00.742 [2024-09-28 10:37:35.365538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:00.743 [2024-09-28 10:37:35.365545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:00.743 [2024-09-28 10:37:35.365551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:00.743 [2024-09-28 10:37:35.365559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:00.743 [2024-09-28 10:37:35.365564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:00.743 [2024-09-28 10:37:35.365571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:00.743 [2024-09-28 10:37:35.365577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:00.743 [2024-09-28 10:37:35.365585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:00.743 [2024-09-28 10:37:35.365590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:00.743 [2024-09-28 10:37:35.365598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:00.743 [2024-09-28 10:37:35.365603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:00.743 [2024-09-28 10:37:35.365612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:00.743 [2024-09-28 10:37:35.365619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:00.743 [2024-09-28 10:37:35.365626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:00.743 [2024-09-28 10:37:35.365631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:00.743 [2024-09-28 10:37:35.365638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:00.743 [2024-09-28 10:37:35.365644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:00.743 [2024-09-28 10:37:35.365651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:00.743 [2024-09-28 10:37:35.365657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:00.743 [2024-09-28 10:37:35.365664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:00.743 [2024-09-28 10:37:35.365670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:00.743 [2024-09-28 10:37:35.365677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:00.743 [2024-09-28 10:37:35.365683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:00.743 [2024-09-28 10:37:35.365691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:00.743 [2024-09-28 10:37:35.365697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:00.743 [2024-09-28 10:37:35.365705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:00.743 [2024-09-28 10:37:35.365710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:00.743 [2024-09-28 10:37:35.365721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:00.743 [2024-09-28 10:37:35.365727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:00.743 [2024-09-28 10:37:35.365739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:00.743 [2024-09-28 10:37:35.365744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:00.743 [2024-09-28 10:37:35.365752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:00.743 [2024-09-28 10:37:35.365758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:00.743 [2024-09-28 10:37:35.365765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:00.743 [2024-09-28 10:37:35.365771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:00.743 [2024-09-28 10:37:35.365779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:00.743 [2024-09-28 10:37:35.365785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:00.743 [2024-09-28 10:37:35.365792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:00.743 [2024-09-28 10:37:35.365798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:00.743 [2024-09-28 10:37:35.365805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:00.743 [2024-09-28 10:37:35.365810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:00.743 [2024-09-28 10:37:35.365817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:00.743 [2024-09-28 10:37:35.365823] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:00.743 [2024-09-28 10:37:35.365832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:00.743 [2024-09-28 10:37:35.365940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:00.743 [2024-09-28 10:37:35.365947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:00.743 [2024-09-28 10:37:35.365953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:00.743 [2024-09-28 10:37:35.365975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:00.743 [2024-09-28 10:37:35.365981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:00.743 [2024-09-28 10:37:35.365988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:00.743 [2024-09-28 10:37:35.365995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:00.743 [2024-09-28 10:37:35.366001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:00.743 [2024-09-28 10:37:35.366007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:00.743 [2024-09-28 10:37:35.366014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:00.743 [2024-09-28 10:37:35.366020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:00.743 [2024-09-28 10:37:35.366029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:00.743 [2024-09-28 10:37:35.366036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:00.743 [2024-09-28 10:37:35.366043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:00.743 [2024-09-28 10:37:35.366050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:00.743 [2024-09-28 10:37:35.366059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:00.743 [2024-09-28 10:37:35.366071] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:00.743 [2024-09-28 10:37:35.366080] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e5266b08-7571-4e39-9a75-9919ffbc7afd 00:17:00.743 [2024-09-28 10:37:35.366086] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:00.743 [2024-09-28 10:37:35.366093] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:00.743 [2024-09-28 10:37:35.366099] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:00.743 [2024-09-28 10:37:35.366106] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:00.743 [2024-09-28 10:37:35.366112] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:00.743 [2024-09-28 10:37:35.366120] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:00.743 [2024-09-28 10:37:35.366127] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:00.743 
[2024-09-28 10:37:35.366134] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:00.743 [2024-09-28 10:37:35.366139] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:00.743 [2024-09-28 10:37:35.366145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.743 [2024-09-28 10:37:35.366151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:00.743 [2024-09-28 10:37:35.366159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.964 ms 00:17:00.743 [2024-09-28 10:37:35.366165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.743 [2024-09-28 10:37:35.367420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.743 [2024-09-28 10:37:35.367459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:00.743 [2024-09-28 10:37:35.367477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.236 ms 00:17:00.743 [2024-09-28 10:37:35.367492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.743 [2024-09-28 10:37:35.367570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.743 [2024-09-28 10:37:35.367654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:00.744 [2024-09-28 10:37:35.367676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:17:00.744 [2024-09-28 10:37:35.367691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.744 [2024-09-28 10:37:35.373559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.744 [2024-09-28 10:37:35.373659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:00.744 [2024-09-28 10:37:35.373701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.744 [2024-09-28 10:37:35.373722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.744 [2024-09-28 10:37:35.373778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.744 [2024-09-28 10:37:35.373799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:00.744 [2024-09-28 10:37:35.373817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.744 [2024-09-28 10:37:35.373831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.744 [2024-09-28 10:37:35.373902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.744 [2024-09-28 10:37:35.374242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:00.744 [2024-09-28 10:37:35.374281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.744 [2024-09-28 10:37:35.374299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.744 [2024-09-28 10:37:35.374341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.744 [2024-09-28 10:37:35.374359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:00.744 [2024-09-28 10:37:35.374376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.744 [2024-09-28 10:37:35.374453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.744 [2024-09-28 10:37:35.384957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.744 [2024-09-28 10:37:35.385078] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:00.744 [2024-09-28 10:37:35.385119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.744 [2024-09-28 10:37:35.385139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.744 [2024-09-28 10:37:35.394099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.744 [2024-09-28 10:37:35.394209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:00.744 [2024-09-28 10:37:35.394251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.744 [2024-09-28 10:37:35.394269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.744 [2024-09-28 10:37:35.394352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.744 [2024-09-28 10:37:35.394373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:00.744 [2024-09-28 10:37:35.394395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.744 [2024-09-28 10:37:35.394409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.744 [2024-09-28 10:37:35.394450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.744 [2024-09-28 10:37:35.394514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:00.744 [2024-09-28 10:37:35.394535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.744 [2024-09-28 10:37:35.394550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.744 [2024-09-28 10:37:35.394633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.744 [2024-09-28 10:37:35.394653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:00.744 [2024-09-28 10:37:35.394673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.744 [2024-09-28 10:37:35.394688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.744 [2024-09-28 10:37:35.394769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.744 [2024-09-28 10:37:35.394790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:00.744 [2024-09-28 10:37:35.394810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.744 [2024-09-28 10:37:35.394824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.744 [2024-09-28 10:37:35.394876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.744 [2024-09-28 10:37:35.394927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:00.744 [2024-09-28 10:37:35.394947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.744 [2024-09-28 10:37:35.394971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.744 [2024-09-28 10:37:35.395030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.744 [2024-09-28 10:37:35.395050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:00.744 [2024-09-28 10:37:35.395095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.744 [2024-09-28 10:37:35.395112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.744 [2024-09-28 10:37:35.395259] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process 
finished, name 'FTL shutdown', duration = 53.934 ms, result 0 00:17:00.744 true 00:17:00.744 10:37:35 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 87050 00:17:00.744 10:37:35 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 87050 ']' 00:17:00.744 10:37:35 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 87050 00:17:00.744 10:37:35 ftl.ftl_restore -- common/autotest_common.sh@955 -- # uname 00:17:00.744 10:37:35 ftl.ftl_restore -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:00.744 10:37:35 ftl.ftl_restore -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87050 00:17:00.744 killing process with pid 87050 00:17:00.744 10:37:35 ftl.ftl_restore -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:00.744 10:37:35 ftl.ftl_restore -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:00.744 10:37:35 ftl.ftl_restore -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87050' 00:17:00.744 10:37:35 ftl.ftl_restore -- common/autotest_common.sh@969 -- # kill 87050 00:17:00.744 10:37:35 ftl.ftl_restore -- common/autotest_common.sh@974 -- # wait 87050 00:17:06.041 10:37:40 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:17:10.234 262144+0 records in 00:17:10.234 262144+0 records out 00:17:10.234 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.6407 s, 295 MB/s 00:17:10.234 10:37:44 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:17:11.610 10:37:45 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:11.610 [2024-09-28 10:37:46.032202] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:17:11.610 [2024-09-28 10:37:46.032321] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87262 ] 00:17:11.610 [2024-09-28 10:37:46.161883] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:17:11.610 [2024-09-28 10:37:46.184119] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.610 [2024-09-28 10:37:46.248940] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.610 [2024-09-28 10:37:46.367049] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:11.610 [2024-09-28 10:37:46.367327] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:11.873 [2024-09-28 10:37:46.524721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.873 [2024-09-28 10:37:46.524774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:11.873 [2024-09-28 10:37:46.524790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:11.873 [2024-09-28 10:37:46.524803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.873 [2024-09-28 10:37:46.524863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.873 [2024-09-28 10:37:46.524873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:11.873 [2024-09-28 10:37:46.524882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:17:11.873 [2024-09-28 10:37:46.524890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.873 [2024-09-28 10:37:46.524917] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:11.873 [2024-09-28 10:37:46.525214] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:11.873 [2024-09-28 10:37:46.525233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.873 [2024-09-28 10:37:46.525245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:11.873 [2024-09-28 10:37:46.525254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 00:17:11.873 [2024-09-28 10:37:46.525265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.873 [2024-09-28 10:37:46.527040] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:11.873 [2024-09-28 10:37:46.531235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.873 [2024-09-28 10:37:46.531291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:11.873 [2024-09-28 10:37:46.531310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.196 ms 00:17:11.873 [2024-09-28 10:37:46.531326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.873 [2024-09-28 10:37:46.531402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.873 [2024-09-28 10:37:46.531419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:11.873 [2024-09-28 10:37:46.531428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:17:11.873 [2024-09-28 10:37:46.531436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.873 [2024-09-28 10:37:46.540977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.873 [2024-09-28 10:37:46.541011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:11.873 [2024-09-28 10:37:46.541022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.494 ms 00:17:11.873 [2024-09-28 10:37:46.541034] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.873 [2024-09-28 10:37:46.541120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.873 [2024-09-28 10:37:46.541130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:11.873 [2024-09-28 10:37:46.541139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:17:11.873 [2024-09-28 10:37:46.541147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.873 [2024-09-28 10:37:46.541206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.873 [2024-09-28 10:37:46.541218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:11.873 [2024-09-28 10:37:46.541228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:11.873 [2024-09-28 10:37:46.541237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.873 [2024-09-28 10:37:46.541264] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:11.873 [2024-09-28 10:37:46.543547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.873 [2024-09-28 10:37:46.543755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:11.873 [2024-09-28 10:37:46.543782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.289 ms 00:17:11.874 [2024-09-28 10:37:46.543790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.874 [2024-09-28 10:37:46.543828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.874 [2024-09-28 10:37:46.543837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:11.874 [2024-09-28 10:37:46.543845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:11.874 [2024-09-28 10:37:46.543854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.874 [2024-09-28 10:37:46.543887] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:11.874 [2024-09-28 10:37:46.543910] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:11.874 [2024-09-28 10:37:46.543948] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:11.874 [2024-09-28 10:37:46.543987] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:11.874 [2024-09-28 10:37:46.544097] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:11.874 [2024-09-28 10:37:46.544109] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:11.874 [2024-09-28 10:37:46.544121] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:11.874 [2024-09-28 10:37:46.544136] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:11.874 [2024-09-28 10:37:46.544145] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:11.874 [2024-09-28 10:37:46.544154] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:11.874 [2024-09-28 10:37:46.544162] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
address size: 4 00:17:11.874 [2024-09-28 10:37:46.544171] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:11.874 [2024-09-28 10:37:46.544179] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:11.874 [2024-09-28 10:37:46.544188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.874 [2024-09-28 10:37:46.544200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:11.874 [2024-09-28 10:37:46.544210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:17:11.874 [2024-09-28 10:37:46.544220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.874 [2024-09-28 10:37:46.544304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.874 [2024-09-28 10:37:46.544317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:11.874 [2024-09-28 10:37:46.544326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:11.874 [2024-09-28 10:37:46.544338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.874 [2024-09-28 10:37:46.544437] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:11.874 [2024-09-28 10:37:46.544449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:11.874 [2024-09-28 10:37:46.544459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:11.874 [2024-09-28 10:37:46.544470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:11.874 [2024-09-28 10:37:46.544480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:11.874 [2024-09-28 10:37:46.544489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:11.874 [2024-09-28 10:37:46.544497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:11.874 [2024-09-28 10:37:46.544505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:11.874 [2024-09-28 10:37:46.544523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:11.874 [2024-09-28 10:37:46.544530] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:11.874 [2024-09-28 10:37:46.544538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:11.874 [2024-09-28 10:37:46.544549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:11.874 [2024-09-28 10:37:46.544558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:11.874 [2024-09-28 10:37:46.544565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:11.874 [2024-09-28 10:37:46.544573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:11.874 [2024-09-28 10:37:46.544581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:11.874 [2024-09-28 10:37:46.544590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:11.874 [2024-09-28 10:37:46.544599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:11.874 [2024-09-28 10:37:46.544609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:11.874 [2024-09-28 10:37:46.544617] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:11.874 [2024-09-28 10:37:46.544626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:11.874 [2024-09-28 10:37:46.544634] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:11.874 [2024-09-28 10:37:46.544642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:11.874 [2024-09-28 10:37:46.544649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:11.874 [2024-09-28 10:37:46.544656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:11.874 [2024-09-28 10:37:46.544662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:11.874 [2024-09-28 10:37:46.544670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:11.874 [2024-09-28 10:37:46.544682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:11.874 [2024-09-28 10:37:46.544689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:11.874 [2024-09-28 10:37:46.544695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:11.874 [2024-09-28 10:37:46.544702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:11.874 [2024-09-28 10:37:46.544710] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:11.874 [2024-09-28 10:37:46.544716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:11.874 [2024-09-28 10:37:46.544722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:11.874 [2024-09-28 10:37:46.544729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:11.874 [2024-09-28 10:37:46.544735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:11.874 [2024-09-28 10:37:46.544742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:11.874 [2024-09-28 10:37:46.544748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:11.874 [2024-09-28 10:37:46.544755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:11.874 [2024-09-28 10:37:46.544761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:11.874 [2024-09-28 10:37:46.544769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:11.874 [2024-09-28 10:37:46.544775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:11.874 [2024-09-28 10:37:46.544781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:11.874 [2024-09-28 10:37:46.544793] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:11.874 [2024-09-28 10:37:46.544802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:11.874 [2024-09-28 10:37:46.544812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:11.874 [2024-09-28 10:37:46.544819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:11.874 [2024-09-28 10:37:46.544827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:11.874 [2024-09-28 10:37:46.544835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:11.874 [2024-09-28 10:37:46.544841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:11.874 [2024-09-28 10:37:46.544850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:11.874 [2024-09-28 10:37:46.544856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:11.874 [2024-09-28 10:37:46.544864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:17:11.874 [2024-09-28 10:37:46.544874] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:11.874 [2024-09-28 10:37:46.544883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:11.874 [2024-09-28 10:37:46.544892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:11.874 [2024-09-28 10:37:46.544899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:11.874 [2024-09-28 10:37:46.544908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:11.874 [2024-09-28 10:37:46.544915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:11.874 [2024-09-28 10:37:46.544925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:11.874 [2024-09-28 10:37:46.544933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:11.874 [2024-09-28 10:37:46.544942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:11.874 [2024-09-28 10:37:46.544949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:11.875 [2024-09-28 10:37:46.544956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:11.875 [2024-09-28 10:37:46.544986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:11.875 [2024-09-28 10:37:46.544994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:11.875 [2024-09-28 10:37:46.545002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:11.875 [2024-09-28 10:37:46.545009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:11.875 [2024-09-28 10:37:46.545017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:11.875 [2024-09-28 10:37:46.545024] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:11.875 [2024-09-28 10:37:46.545033] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:11.875 [2024-09-28 10:37:46.545048] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:11.875 [2024-09-28 10:37:46.545056] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:11.875 [2024-09-28 10:37:46.545063] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:11.875 [2024-09-28 10:37:46.545071] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:11.875 [2024-09-28 10:37:46.545081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.875 [2024-09-28 10:37:46.545090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:11.875 [2024-09-28 10:37:46.545102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.710 ms 00:17:11.875 [2024-09-28 10:37:46.545109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.875 [2024-09-28 10:37:46.570144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.875 [2024-09-28 10:37:46.570205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:11.875 [2024-09-28 10:37:46.570219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.969 ms 00:17:11.875 [2024-09-28 10:37:46.570228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.875 [2024-09-28 10:37:46.570324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.875 [2024-09-28 10:37:46.570335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:11.875 [2024-09-28 10:37:46.570345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:17:11.875 [2024-09-28 10:37:46.570353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.875 [2024-09-28 10:37:46.585032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.875 [2024-09-28 10:37:46.585076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:11.875 [2024-09-28 10:37:46.585088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.605 ms 00:17:11.875 [2024-09-28 10:37:46.585097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.875 [2024-09-28 10:37:46.585133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.875 [2024-09-28 10:37:46.585142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:11.875 [2024-09-28 10:37:46.585151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:11.875 [2024-09-28 10:37:46.585159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.875 [2024-09-28 10:37:46.585808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.875 [2024-09-28 10:37:46.585843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:11.875 [2024-09-28 10:37:46.585855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.591 ms 00:17:11.875 [2024-09-28 10:37:46.585865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.875 [2024-09-28 10:37:46.586052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.875 [2024-09-28 10:37:46.586063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:11.875 [2024-09-28 10:37:46.586072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.164 ms 00:17:11.875 [2024-09-28 10:37:46.586084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.875 [2024-09-28 10:37:46.594600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.875 [2024-09-28 
10:37:46.594648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:11.875 [2024-09-28 10:37:46.594663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.491 ms 00:17:11.875 [2024-09-28 10:37:46.594672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.875 [2024-09-28 10:37:46.599144] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:17:11.875 [2024-09-28 10:37:46.599192] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:11.875 [2024-09-28 10:37:46.599205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.875 [2024-09-28 10:37:46.599213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:11.875 [2024-09-28 10:37:46.599223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.439 ms 00:17:11.875 [2024-09-28 10:37:46.599240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.875 [2024-09-28 10:37:46.615325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.875 [2024-09-28 10:37:46.615372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:11.875 [2024-09-28 10:37:46.615393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.014 ms 00:17:11.875 [2024-09-28 10:37:46.615402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.875 [2024-09-28 10:37:46.618329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.875 [2024-09-28 10:37:46.618547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:11.875 [2024-09-28 10:37:46.618566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.867 ms 00:17:11.875 [2024-09-28 10:37:46.618575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.875 [2024-09-28 10:37:46.621188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.875 [2024-09-28 10:37:46.621235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:11.875 [2024-09-28 10:37:46.621245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.574 ms 00:17:11.875 [2024-09-28 10:37:46.621253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.875 [2024-09-28 10:37:46.621612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.875 [2024-09-28 10:37:46.621625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:11.875 [2024-09-28 10:37:46.621636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:17:11.875 [2024-09-28 10:37:46.621645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.136 [2024-09-28 10:37:46.650640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.136 [2024-09-28 10:37:46.650829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:12.136 [2024-09-28 10:37:46.650895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.974 ms 00:17:12.136 [2024-09-28 10:37:46.650920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.136 [2024-09-28 10:37:46.660004] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:12.136 [2024-09-28 10:37:46.664079] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.136 [2024-09-28 10:37:46.664220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:12.136 [2024-09-28 10:37:46.664308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.964 ms 00:17:12.136 [2024-09-28 10:37:46.664333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.136 [2024-09-28 10:37:46.664436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.136 [2024-09-28 10:37:46.664463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:12.136 [2024-09-28 10:37:46.664545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:17:12.136 [2024-09-28 10:37:46.664569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.136 [2024-09-28 10:37:46.664668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.136 [2024-09-28 10:37:46.664694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:12.136 [2024-09-28 10:37:46.664752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:17:12.136 [2024-09-28 10:37:46.664770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.136 [2024-09-28 10:37:46.664800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.136 [2024-09-28 10:37:46.664810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:12.136 [2024-09-28 10:37:46.664819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:12.136 [2024-09-28 10:37:46.664830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.136 [2024-09-28 10:37:46.664872] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:12.136 [2024-09-28 10:37:46.664883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.136 [2024-09-28 10:37:46.664897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:12.136 [2024-09-28 10:37:46.664910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:12.136 [2024-09-28 10:37:46.664918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.136 [2024-09-28 10:37:46.670899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.136 [2024-09-28 10:37:46.671070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:12.136 [2024-09-28 10:37:46.671129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.957 ms 00:17:12.137 [2024-09-28 10:37:46.671153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.137 [2024-09-28 10:37:46.671247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.137 [2024-09-28 10:37:46.671288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:12.137 [2024-09-28 10:37:46.671314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:17:12.137 [2024-09-28 10:37:46.671335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.137 [2024-09-28 10:37:46.672874] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 147.525 ms, result 0 00:18:18.325  Copying: 18/1024 [MB] (18 MBps) Copying: 36/1024 [MB] (17 MBps) Copying: 59/1024 [MB] (23 MBps) Copying: 85/1024 [MB] (25 
MBps) Copying: 107/1024 [MB] (22 MBps) Copying: 126/1024 [MB] (19 MBps) Copying: 147/1024 [MB] (20 MBps) Copying: 169/1024 [MB] (22 MBps) Copying: 183/1024 [MB] (14 MBps) Copying: 201/1024 [MB] (17 MBps) Copying: 223/1024 [MB] (21 MBps) Copying: 246/1024 [MB] (23 MBps) Copying: 266/1024 [MB] (19 MBps) Copying: 283/1024 [MB] (16 MBps) Copying: 294/1024 [MB] (11 MBps) Copying: 305/1024 [MB] (11 MBps) Copying: 317/1024 [MB] (11 MBps) Copying: 329/1024 [MB] (11 MBps) Copying: 340/1024 [MB] (11 MBps) Copying: 351/1024 [MB] (11 MBps) Copying: 362/1024 [MB] (11 MBps) Copying: 374/1024 [MB] (11 MBps) Copying: 385/1024 [MB] (11 MBps) Copying: 396/1024 [MB] (10 MBps) Copying: 407/1024 [MB] (11 MBps) Copying: 418/1024 [MB] (11 MBps) Copying: 430/1024 [MB] (11 MBps) Copying: 441/1024 [MB] (10 MBps) Copying: 452/1024 [MB] (11 MBps) Copying: 462/1024 [MB] (10 MBps) Copying: 473/1024 [MB] (10 MBps) Copying: 483/1024 [MB] (10 MBps) Copying: 493/1024 [MB] (10 MBps) Copying: 504/1024 [MB] (10 MBps) Copying: 514/1024 [MB] (10 MBps) Copying: 524/1024 [MB] (10 MBps) Copying: 547708/1048576 [kB] (10188 kBps) Copying: 545/1024 [MB] (10 MBps) Copying: 555/1024 [MB] (10 MBps) Copying: 566/1024 [MB] (10 MBps) Copying: 576/1024 [MB] (10 MBps) Copying: 587/1024 [MB] (10 MBps) Copying: 597/1024 [MB] (10 MBps) Copying: 622276/1048576 [kB] (10124 kBps) Copying: 617/1024 [MB] (10 MBps) Copying: 629/1024 [MB] (11 MBps) Copying: 640/1024 [MB] (11 MBps) Copying: 651/1024 [MB] (11 MBps) Copying: 662/1024 [MB] (11 MBps) Copying: 673/1024 [MB] (10 MBps) Copying: 684/1024 [MB] (11 MBps) Copying: 694/1024 [MB] (10 MBps) Copying: 733/1024 [MB] (38 MBps) Copying: 787/1024 [MB] (54 MBps) Copying: 817/1024 [MB] (29 MBps) Copying: 831/1024 [MB] (14 MBps) Copying: 850/1024 [MB] (18 MBps) Copying: 873/1024 [MB] (23 MBps) Copying: 894/1024 [MB] (21 MBps) Copying: 912/1024 [MB] (18 MBps) Copying: 936/1024 [MB] (24 MBps) Copying: 955/1024 [MB] (18 MBps) Copying: 971/1024 [MB] (16 MBps) Copying: 986/1024 [MB] (15 MBps) Copying: 1000/1024 [MB] (14 MBps) Copying: 1020/1024 [MB] (19 MBps) Copying: 1024/1024 [MB] (average 15 MBps)[2024-09-28 10:38:52.873784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.325 [2024-09-28 10:38:52.873852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:18.325 [2024-09-28 10:38:52.873868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:18.325 [2024-09-28 10:38:52.873878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.325 [2024-09-28 10:38:52.873905] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:18.325 [2024-09-28 10:38:52.874724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.325 [2024-09-28 10:38:52.874763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:18.325 [2024-09-28 10:38:52.874774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.803 ms 00:18:18.325 [2024-09-28 10:38:52.874783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.325 [2024-09-28 10:38:52.877652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.325 [2024-09-28 10:38:52.877744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:18.325 [2024-09-28 10:38:52.877755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.841 ms 00:18:18.325 [2024-09-28 10:38:52.877764] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.325 [2024-09-28 10:38:52.894739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.325 [2024-09-28 10:38:52.894801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:18.325 [2024-09-28 10:38:52.894814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.958 ms 00:18:18.325 [2024-09-28 10:38:52.894822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.325 [2024-09-28 10:38:52.901046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.325 [2024-09-28 10:38:52.901102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:18.325 [2024-09-28 10:38:52.901114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.179 ms 00:18:18.325 [2024-09-28 10:38:52.901133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.325 [2024-09-28 10:38:52.903893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.325 [2024-09-28 10:38:52.903947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:18.325 [2024-09-28 10:38:52.903957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.696 ms 00:18:18.325 [2024-09-28 10:38:52.903984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.325 [2024-09-28 10:38:52.908688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.325 [2024-09-28 10:38:52.908744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:18.325 [2024-09-28 10:38:52.908754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.655 ms 00:18:18.325 [2024-09-28 10:38:52.908762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.325 [2024-09-28 10:38:52.908889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.325 [2024-09-28 10:38:52.908910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:18.325 [2024-09-28 10:38:52.908920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:18:18.325 [2024-09-28 10:38:52.908928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.325 [2024-09-28 10:38:52.911712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.325 [2024-09-28 10:38:52.911923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:18.325 [2024-09-28 10:38:52.911942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.765 ms 00:18:18.325 [2024-09-28 10:38:52.911951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.325 [2024-09-28 10:38:52.914138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.325 [2024-09-28 10:38:52.914187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:18.325 [2024-09-28 10:38:52.914196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.123 ms 00:18:18.325 [2024-09-28 10:38:52.914203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.325 [2024-09-28 10:38:52.916085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.325 [2024-09-28 10:38:52.916133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:18.325 [2024-09-28 10:38:52.916158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.835 ms 
00:18:18.325 [2024-09-28 10:38:52.916164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.325 [2024-09-28 10:38:52.917865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.325 [2024-09-28 10:38:52.917918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:18.325 [2024-09-28 10:38:52.917927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.627 ms 00:18:18.325 [2024-09-28 10:38:52.917934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.325 [2024-09-28 10:38:52.917995] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:18.326 [2024-09-28 10:38:52.918019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 
10:38:52.918205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 
00:18:18.326 [2024-09-28 10:38:52.918402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 
wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:18.326 [2024-09-28 10:38:52.918740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:18.327 [2024-09-28 10:38:52.918747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:18.327 [2024-09-28 10:38:52.918755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:18.327 [2024-09-28 10:38:52.918762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:18.327 [2024-09-28 10:38:52.918769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:18.327 [2024-09-28 10:38:52.918776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:18.327 [2024-09-28 10:38:52.918784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:18.327 [2024-09-28 10:38:52.918792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:18.327 [2024-09-28 10:38:52.918800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:18.327 [2024-09-28 10:38:52.918807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:18.327 [2024-09-28 10:38:52.918827] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:18.327 [2024-09-28 10:38:52.918835] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e5266b08-7571-4e39-9a75-9919ffbc7afd 00:18:18.327 [2024-09-28 10:38:52.918843] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:18.327 [2024-09-28 10:38:52.918851] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:18.327 [2024-09-28 10:38:52.918858] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:18.327 [2024-09-28 10:38:52.918867] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:18.327 [2024-09-28 10:38:52.918879] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:18.327 [2024-09-28 10:38:52.918888] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:18.327 [2024-09-28 10:38:52.918895] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:18.327 [2024-09-28 10:38:52.918901] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:18.327 [2024-09-28 10:38:52.918908] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:18.327 [2024-09-28 10:38:52.918916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.327 [2024-09-28 10:38:52.918928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:18.327 [2024-09-28 10:38:52.918940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.922 ms 00:18:18.327 [2024-09-28 10:38:52.918948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.327 [2024-09-28 10:38:52.921524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.327 [2024-09-28 10:38:52.921697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:18.327 [2024-09-28 10:38:52.921722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.544 ms 00:18:18.327 [2024-09-28 10:38:52.921730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.327 [2024-09-28 10:38:52.921853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.327 [2024-09-28 10:38:52.921867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:18.327 [2024-09-28 10:38:52.921881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:18:18.327 [2024-09-28 10:38:52.921889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.327 [2024-09-28 10:38:52.928812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:18.327 [2024-09-28 10:38:52.929005] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:18.327 [2024-09-28 10:38:52.929024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:18.327 [2024-09-28 10:38:52.929032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.327 [2024-09-28 10:38:52.929099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:18.327 [2024-09-28 10:38:52.929120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:18.327 [2024-09-28 10:38:52.929133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:18.327 [2024-09-28 10:38:52.929140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.327 [2024-09-28 10:38:52.929188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:18.327 [2024-09-28 10:38:52.929198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:18.327 [2024-09-28 10:38:52.929205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:18.327 [2024-09-28 10:38:52.929213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.327 [2024-09-28 10:38:52.929229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:18.327 [2024-09-28 10:38:52.929237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:18.327 [2024-09-28 10:38:52.929248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:18.327 [2024-09-28 10:38:52.929259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.327 [2024-09-28 10:38:52.942334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:18.327 [2024-09-28 10:38:52.942388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:18.327 [2024-09-28 10:38:52.942401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:18.327 [2024-09-28 10:38:52.942408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.327 [2024-09-28 10:38:52.952619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:18.327 [2024-09-28 10:38:52.952790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:18.327 [2024-09-28 10:38:52.952816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:18.327 [2024-09-28 10:38:52.952824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.327 [2024-09-28 10:38:52.952879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:18.327 [2024-09-28 10:38:52.952888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:18.327 [2024-09-28 10:38:52.952902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:18.327 [2024-09-28 10:38:52.952910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.327 [2024-09-28 10:38:52.952946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:18.327 [2024-09-28 10:38:52.952954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:18.327 [2024-09-28 10:38:52.953136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:18.327 [2024-09-28 10:38:52.953160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.327 [2024-09-28 10:38:52.953244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:18:18.327 [2024-09-28 10:38:52.953255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:18.327 [2024-09-28 10:38:52.953263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:18.327 [2024-09-28 10:38:52.953271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.327 [2024-09-28 10:38:52.953299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:18.327 [2024-09-28 10:38:52.953308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:18.327 [2024-09-28 10:38:52.953316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:18.327 [2024-09-28 10:38:52.953324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.327 [2024-09-28 10:38:52.953365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:18.327 [2024-09-28 10:38:52.953374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:18.327 [2024-09-28 10:38:52.953382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:18.327 [2024-09-28 10:38:52.953389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.327 [2024-09-28 10:38:52.953439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:18.327 [2024-09-28 10:38:52.953449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:18.327 [2024-09-28 10:38:52.953458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:18.327 [2024-09-28 10:38:52.953468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.327 [2024-09-28 10:38:52.953597] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 79.780 ms, result 0 00:18:18.589 00:18:18.589 00:18:18.589 10:38:53 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:18:18.589 [2024-09-28 10:38:53.322785] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:18:18.589 [2024-09-28 10:38:53.323181] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87964 ] 00:18:18.849 [2024-09-28 10:38:53.455826] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:18:18.849 [2024-09-28 10:38:53.476446] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.849 [2024-09-28 10:38:53.527645] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.112 [2024-09-28 10:38:53.639218] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:19.112 [2024-09-28 10:38:53.639512] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:19.112 [2024-09-28 10:38:53.800404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.112 [2024-09-28 10:38:53.800624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:19.112 [2024-09-28 10:38:53.800832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:19.112 [2024-09-28 10:38:53.800875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.112 [2024-09-28 10:38:53.800996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.112 [2024-09-28 10:38:53.801025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:19.112 [2024-09-28 10:38:53.801046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:18:19.112 [2024-09-28 10:38:53.801065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.112 [2024-09-28 10:38:53.801105] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:19.112 [2024-09-28 10:38:53.801513] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:19.112 [2024-09-28 10:38:53.801584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.112 [2024-09-28 10:38:53.801596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:19.112 [2024-09-28 10:38:53.801607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.488 ms 00:18:19.112 [2024-09-28 10:38:53.801618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.112 [2024-09-28 10:38:53.803422] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:19.112 [2024-09-28 10:38:53.807377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.112 [2024-09-28 10:38:53.807430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:19.112 [2024-09-28 10:38:53.807449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.957 ms 00:18:19.112 [2024-09-28 10:38:53.807462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.112 [2024-09-28 10:38:53.807540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.112 [2024-09-28 10:38:53.807553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:19.112 [2024-09-28 10:38:53.807562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:18:19.112 [2024-09-28 10:38:53.807570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.112 [2024-09-28 10:38:53.815838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.112 [2024-09-28 10:38:53.816048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:19.112 [2024-09-28 10:38:53.816076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.224 ms 00:18:19.112 [2024-09-28 10:38:53.816087] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.112 [2024-09-28 10:38:53.816173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.112 [2024-09-28 10:38:53.816182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:19.112 [2024-09-28 10:38:53.816191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:18:19.112 [2024-09-28 10:38:53.816199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.112 [2024-09-28 10:38:53.816259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.112 [2024-09-28 10:38:53.816270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:19.112 [2024-09-28 10:38:53.816278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:19.112 [2024-09-28 10:38:53.816286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.112 [2024-09-28 10:38:53.816315] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:19.112 [2024-09-28 10:38:53.818384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.112 [2024-09-28 10:38:53.818411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:19.112 [2024-09-28 10:38:53.818431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.075 ms 00:18:19.112 [2024-09-28 10:38:53.818438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.112 [2024-09-28 10:38:53.818474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.112 [2024-09-28 10:38:53.818483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:19.112 [2024-09-28 10:38:53.818491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:18:19.112 [2024-09-28 10:38:53.818499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.112 [2024-09-28 10:38:53.818529] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:19.112 [2024-09-28 10:38:53.818552] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:19.112 [2024-09-28 10:38:53.818589] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:19.112 [2024-09-28 10:38:53.818610] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:19.112 [2024-09-28 10:38:53.818717] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:19.112 [2024-09-28 10:38:53.818727] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:19.112 [2024-09-28 10:38:53.818739] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:19.112 [2024-09-28 10:38:53.818754] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:19.112 [2024-09-28 10:38:53.818768] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:19.112 [2024-09-28 10:38:53.818776] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:19.112 [2024-09-28 10:38:53.818784] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
address size: 4 00:18:19.112 [2024-09-28 10:38:53.818792] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:19.112 [2024-09-28 10:38:53.818805] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:19.112 [2024-09-28 10:38:53.818814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.112 [2024-09-28 10:38:53.818822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:19.113 [2024-09-28 10:38:53.818830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:18:19.113 [2024-09-28 10:38:53.818840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.113 [2024-09-28 10:38:53.818923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.113 [2024-09-28 10:38:53.818936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:19.113 [2024-09-28 10:38:53.818947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:19.113 [2024-09-28 10:38:53.818955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.113 [2024-09-28 10:38:53.819070] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:19.113 [2024-09-28 10:38:53.819083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:19.113 [2024-09-28 10:38:53.819092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:19.113 [2024-09-28 10:38:53.819106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:19.113 [2024-09-28 10:38:53.819116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:19.113 [2024-09-28 10:38:53.819123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:19.113 [2024-09-28 10:38:53.819131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:19.113 [2024-09-28 10:38:53.819140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:19.113 [2024-09-28 10:38:53.819154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:19.113 [2024-09-28 10:38:53.819162] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:19.113 [2024-09-28 10:38:53.819186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:19.113 [2024-09-28 10:38:53.819196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:19.113 [2024-09-28 10:38:53.819204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:19.113 [2024-09-28 10:38:53.819212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:19.113 [2024-09-28 10:38:53.819220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:19.113 [2024-09-28 10:38:53.819228] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:19.113 [2024-09-28 10:38:53.819236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:19.113 [2024-09-28 10:38:53.819244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:19.113 [2024-09-28 10:38:53.819253] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:19.113 [2024-09-28 10:38:53.819261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:19.113 [2024-09-28 10:38:53.819271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:19.113 [2024-09-28 10:38:53.819280] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:19.113 [2024-09-28 10:38:53.819287] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:19.113 [2024-09-28 10:38:53.819296] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:19.113 [2024-09-28 10:38:53.819304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:19.113 [2024-09-28 10:38:53.819312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:19.113 [2024-09-28 10:38:53.819319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:19.113 [2024-09-28 10:38:53.819334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:19.113 [2024-09-28 10:38:53.819343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:19.113 [2024-09-28 10:38:53.819351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:19.113 [2024-09-28 10:38:53.819358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:19.113 [2024-09-28 10:38:53.819366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:19.113 [2024-09-28 10:38:53.819375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:19.113 [2024-09-28 10:38:53.819383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:19.113 [2024-09-28 10:38:53.819390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:19.113 [2024-09-28 10:38:53.819398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:19.113 [2024-09-28 10:38:53.819405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:19.113 [2024-09-28 10:38:53.819412] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:19.113 [2024-09-28 10:38:53.819419] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:19.113 [2024-09-28 10:38:53.819425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:19.113 [2024-09-28 10:38:53.819432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:19.113 [2024-09-28 10:38:53.819439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:19.113 [2024-09-28 10:38:53.819446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:19.113 [2024-09-28 10:38:53.819455] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:19.113 [2024-09-28 10:38:53.819463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:19.113 [2024-09-28 10:38:53.819473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:19.113 [2024-09-28 10:38:53.819481] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:19.113 [2024-09-28 10:38:53.819489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:19.113 [2024-09-28 10:38:53.819495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:19.113 [2024-09-28 10:38:53.819502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:19.113 [2024-09-28 10:38:53.819509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:19.113 [2024-09-28 10:38:53.819515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:19.113 [2024-09-28 10:38:53.819523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:18:19.113 [2024-09-28 10:38:53.819532] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:19.113 [2024-09-28 10:38:53.819542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:19.113 [2024-09-28 10:38:53.819552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:19.113 [2024-09-28 10:38:53.819559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:19.113 [2024-09-28 10:38:53.819567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:19.113 [2024-09-28 10:38:53.819574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:19.113 [2024-09-28 10:38:53.819583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:19.113 [2024-09-28 10:38:53.819591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:19.113 [2024-09-28 10:38:53.819598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:19.113 [2024-09-28 10:38:53.819605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:19.113 [2024-09-28 10:38:53.819613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:19.113 [2024-09-28 10:38:53.819620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:19.113 [2024-09-28 10:38:53.819627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:19.113 [2024-09-28 10:38:53.819635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:19.113 [2024-09-28 10:38:53.819641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:19.113 [2024-09-28 10:38:53.819648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:19.113 [2024-09-28 10:38:53.819655] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:19.113 [2024-09-28 10:38:53.819664] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:19.113 [2024-09-28 10:38:53.819674] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:19.113 [2024-09-28 10:38:53.819682] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:19.113 [2024-09-28 10:38:53.819689] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:19.113 [2024-09-28 10:38:53.819696] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:19.113 [2024-09-28 10:38:53.819706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.113 [2024-09-28 10:38:53.819713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:19.113 [2024-09-28 10:38:53.819721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.701 ms 00:18:19.113 [2024-09-28 10:38:53.819728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.113 [2024-09-28 10:38:53.843399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.113 [2024-09-28 10:38:53.843662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:19.113 [2024-09-28 10:38:53.843917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.620 ms 00:18:19.113 [2024-09-28 10:38:53.844029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.113 [2024-09-28 10:38:53.844204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.113 [2024-09-28 10:38:53.844329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:19.113 [2024-09-28 10:38:53.844368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:18:19.113 [2024-09-28 10:38:53.844413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.113 [2024-09-28 10:38:53.856877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.114 [2024-09-28 10:38:53.857058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:19.114 [2024-09-28 10:38:53.857132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.308 ms 00:18:19.114 [2024-09-28 10:38:53.857154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.114 [2024-09-28 10:38:53.857206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.114 [2024-09-28 10:38:53.857228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:19.114 [2024-09-28 10:38:53.857247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:19.114 [2024-09-28 10:38:53.857266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.114 [2024-09-28 10:38:53.857829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.114 [2024-09-28 10:38:53.857883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:19.114 [2024-09-28 10:38:53.857915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.483 ms 00:18:19.114 [2024-09-28 10:38:53.857933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.114 [2024-09-28 10:38:53.858115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.114 [2024-09-28 10:38:53.858140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:19.114 [2024-09-28 10:38:53.858161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:18:19.114 [2024-09-28 10:38:53.858181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.114 [2024-09-28 10:38:53.865349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.114 [2024-09-28 
10:38:53.865503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:19.114 [2024-09-28 10:38:53.865557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.072 ms 00:18:19.114 [2024-09-28 10:38:53.865579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.114 [2024-09-28 10:38:53.869470] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:19.114 [2024-09-28 10:38:53.869640] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:19.114 [2024-09-28 10:38:53.869694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.114 [2024-09-28 10:38:53.869704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:19.114 [2024-09-28 10:38:53.869723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.004 ms 00:18:19.114 [2024-09-28 10:38:53.869730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.114 [2024-09-28 10:38:53.885700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.114 [2024-09-28 10:38:53.885755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:19.114 [2024-09-28 10:38:53.885768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.844 ms 00:18:19.114 [2024-09-28 10:38:53.885776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.374 [2024-09-28 10:38:53.888932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.374 [2024-09-28 10:38:53.889118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:19.375 [2024-09-28 10:38:53.889137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.101 ms 00:18:19.375 [2024-09-28 10:38:53.889144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.375 [2024-09-28 10:38:53.891919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.375 [2024-09-28 10:38:53.891983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:19.375 [2024-09-28 10:38:53.891994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.730 ms 00:18:19.375 [2024-09-28 10:38:53.892001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.375 [2024-09-28 10:38:53.892352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.375 [2024-09-28 10:38:53.892364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:19.375 [2024-09-28 10:38:53.892380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:18:19.375 [2024-09-28 10:38:53.892391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.375 [2024-09-28 10:38:53.915784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.375 [2024-09-28 10:38:53.916034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:19.375 [2024-09-28 10:38:53.916055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.370 ms 00:18:19.375 [2024-09-28 10:38:53.916065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.375 [2024-09-28 10:38:53.924433] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:19.375 [2024-09-28 10:38:53.927765] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.375 [2024-09-28 10:38:53.927932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:19.375 [2024-09-28 10:38:53.927951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.556 ms 00:18:19.375 [2024-09-28 10:38:53.927989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.375 [2024-09-28 10:38:53.928079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.375 [2024-09-28 10:38:53.928095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:19.375 [2024-09-28 10:38:53.928106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:18:19.375 [2024-09-28 10:38:53.928115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.375 [2024-09-28 10:38:53.928186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.375 [2024-09-28 10:38:53.928201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:19.375 [2024-09-28 10:38:53.928213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:18:19.375 [2024-09-28 10:38:53.928221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.375 [2024-09-28 10:38:53.928242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.375 [2024-09-28 10:38:53.928251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:19.375 [2024-09-28 10:38:53.928259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:19.375 [2024-09-28 10:38:53.928271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.375 [2024-09-28 10:38:53.928311] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:19.375 [2024-09-28 10:38:53.928322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.375 [2024-09-28 10:38:53.928330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:19.375 [2024-09-28 10:38:53.928338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:18:19.375 [2024-09-28 10:38:53.928349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.375 [2024-09-28 10:38:53.933706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.375 [2024-09-28 10:38:53.933753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:19.375 [2024-09-28 10:38:53.933764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.337 ms 00:18:19.375 [2024-09-28 10:38:53.933772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.375 [2024-09-28 10:38:53.933861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.375 [2024-09-28 10:38:53.933872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:19.375 [2024-09-28 10:38:53.933881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:18:19.375 [2024-09-28 10:38:53.933892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.375 [2024-09-28 10:38:53.935277] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 134.300 ms, result 0 00:19:28.143  Copying: 19/1024 [MB] (19 MBps) Copying: 37/1024 [MB] (17 MBps) Copying: 60/1024 [MB] (22 MBps) Copying: 85/1024 [MB] (24 
MBps) Copying: 96/1024 [MB] (10 MBps) Copying: 106/1024 [MB] (10 MBps) Copying: 117/1024 [MB] (10 MBps) Copying: 127/1024 [MB] (10 MBps) Copying: 138/1024 [MB] (10 MBps) Copying: 149/1024 [MB] (10 MBps) Copying: 160/1024 [MB] (10 MBps) Copying: 170/1024 [MB] (10 MBps) Copying: 181/1024 [MB] (10 MBps) Copying: 192/1024 [MB] (10 MBps) Copying: 203/1024 [MB] (10 MBps) Copying: 217/1024 [MB] (13 MBps) Copying: 237/1024 [MB] (19 MBps) Copying: 251/1024 [MB] (13 MBps) Copying: 262/1024 [MB] (11 MBps) Copying: 273/1024 [MB] (10 MBps) Copying: 290/1024 [MB] (16 MBps) Copying: 308/1024 [MB] (18 MBps) Copying: 326/1024 [MB] (17 MBps) Copying: 345/1024 [MB] (18 MBps) Copying: 358/1024 [MB] (12 MBps) Copying: 381/1024 [MB] (23 MBps) Copying: 402/1024 [MB] (20 MBps) Copying: 429/1024 [MB] (27 MBps) Copying: 440/1024 [MB] (10 MBps) Copying: 450/1024 [MB] (10 MBps) Copying: 461/1024 [MB] (10 MBps) Copying: 472/1024 [MB] (10 MBps) Copying: 483/1024 [MB] (10 MBps) Copying: 493/1024 [MB] (10 MBps) Copying: 512/1024 [MB] (18 MBps) Copying: 525/1024 [MB] (12 MBps) Copying: 546/1024 [MB] (21 MBps) Copying: 570/1024 [MB] (23 MBps) Copying: 584/1024 [MB] (14 MBps) Copying: 601/1024 [MB] (17 MBps) Copying: 611/1024 [MB] (10 MBps) Copying: 628/1024 [MB] (16 MBps) Copying: 646/1024 [MB] (17 MBps) Copying: 663/1024 [MB] (17 MBps) Copying: 675/1024 [MB] (11 MBps) Copying: 688/1024 [MB] (13 MBps) Copying: 705/1024 [MB] (16 MBps) Copying: 717/1024 [MB] (12 MBps) Copying: 731/1024 [MB] (13 MBps) Copying: 753/1024 [MB] (22 MBps) Copying: 769/1024 [MB] (16 MBps) Copying: 780/1024 [MB] (10 MBps) Copying: 791/1024 [MB] (10 MBps) Copying: 811/1024 [MB] (19 MBps) Copying: 830/1024 [MB] (18 MBps) Copying: 841/1024 [MB] (11 MBps) Copying: 852/1024 [MB] (10 MBps) Copying: 862/1024 [MB] (10 MBps) Copying: 874/1024 [MB] (11 MBps) Copying: 893/1024 [MB] (19 MBps) Copying: 917/1024 [MB] (23 MBps) Copying: 938/1024 [MB] (20 MBps) Copying: 951/1024 [MB] (13 MBps) Copying: 961/1024 [MB] (10 MBps) Copying: 975/1024 [MB] (13 MBps) Copying: 993/1024 [MB] (18 MBps) Copying: 1010/1024 [MB] (16 MBps) Copying: 1024/1024 [MB] (average 15 MBps)[2024-09-28 10:40:02.566807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.143 [2024-09-28 10:40:02.566926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:28.143 [2024-09-28 10:40:02.566991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:28.143 [2024-09-28 10:40:02.567033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.143 [2024-09-28 10:40:02.567131] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:28.143 [2024-09-28 10:40:02.567808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.143 [2024-09-28 10:40:02.567949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:28.143 [2024-09-28 10:40:02.567979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.641 ms 00:19:28.143 [2024-09-28 10:40:02.567987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.143 [2024-09-28 10:40:02.568207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.143 [2024-09-28 10:40:02.568217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:28.143 [2024-09-28 10:40:02.568225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:19:28.143 [2024-09-28 10:40:02.568233] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.143 [2024-09-28 10:40:02.571678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.143 [2024-09-28 10:40:02.571698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:28.143 [2024-09-28 10:40:02.571708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.429 ms 00:19:28.143 [2024-09-28 10:40:02.571716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.143 [2024-09-28 10:40:02.578493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.143 [2024-09-28 10:40:02.578607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:28.143 [2024-09-28 10:40:02.578630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.761 ms 00:19:28.143 [2024-09-28 10:40:02.578638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.143 [2024-09-28 10:40:02.580739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.143 [2024-09-28 10:40:02.580773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:28.143 [2024-09-28 10:40:02.580782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.043 ms 00:19:28.143 [2024-09-28 10:40:02.580789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.143 [2024-09-28 10:40:02.583821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.143 [2024-09-28 10:40:02.583858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:28.143 [2024-09-28 10:40:02.583868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.998 ms 00:19:28.143 [2024-09-28 10:40:02.583875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.143 [2024-09-28 10:40:02.584001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.143 [2024-09-28 10:40:02.584011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:28.143 [2024-09-28 10:40:02.584019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:19:28.143 [2024-09-28 10:40:02.584027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.143 [2024-09-28 10:40:02.586574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.143 [2024-09-28 10:40:02.586606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:28.143 [2024-09-28 10:40:02.586615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.527 ms 00:19:28.143 [2024-09-28 10:40:02.586621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.143 [2024-09-28 10:40:02.588701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.143 [2024-09-28 10:40:02.588735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:28.143 [2024-09-28 10:40:02.588744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.050 ms 00:19:28.143 [2024-09-28 10:40:02.588751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.143 [2024-09-28 10:40:02.590615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.143 [2024-09-28 10:40:02.590656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:28.143 [2024-09-28 10:40:02.590665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 1.833 ms 00:19:28.143 [2024-09-28 10:40:02.590671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.143 [2024-09-28 10:40:02.592242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.143 [2024-09-28 10:40:02.592359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:28.143 [2024-09-28 10:40:02.592373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.519 ms 00:19:28.143 [2024-09-28 10:40:02.592380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.143 [2024-09-28 10:40:02.592406] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:28.143 [2024-09-28 10:40:02.592420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:28.143 [2024-09-28 10:40:02.592430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:28.143 [2024-09-28 10:40:02.592437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:28.143 [2024-09-28 10:40:02.592445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:28.143 [2024-09-28 10:40:02.592452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:28.143 [2024-09-28 10:40:02.592460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:28.143 [2024-09-28 10:40:02.592467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:28.143 [2024-09-28 10:40:02.592474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:28.143 [2024-09-28 10:40:02.592482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:28.143 [2024-09-28 10:40:02.592489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:28.143 [2024-09-28 10:40:02.592497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:28.143 [2024-09-28 10:40:02.592504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:28.143 [2024-09-28 10:40:02.592512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:28.143 [2024-09-28 10:40:02.592519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:28.143 [2024-09-28 10:40:02.592526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.592533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.592541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.592548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.592556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.592563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 
00:19:28.144 [2024-09-28 10:40:02.592570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.592577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.592585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.592592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.592599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.592606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.592614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.592758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.592766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.592774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.592781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.592789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.592799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.592807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.592814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.592821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.592829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.592836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.592844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.592852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.592860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.592867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.592875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.592883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.592890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 
wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.592897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.592905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.592912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.592919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.592926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.592933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.592941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.592948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.592956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.592978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.592986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.592995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.593002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.593010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.593017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.593024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.593032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.593039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.593046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.593055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.593062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.593070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.593078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.593085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.593093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.593100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.593107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.593115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.593122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.593130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.593137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.593144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.593152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.593159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.593166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.593173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.593180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.593188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.593195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.593202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.593210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.593217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.593225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.593232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.593239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.593246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.593253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.593260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.593268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.593275] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.593282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.593290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.593298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.593305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.593312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:28.144 [2024-09-28 10:40:02.593327] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:28.144 [2024-09-28 10:40:02.593336] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e5266b08-7571-4e39-9a75-9919ffbc7afd 00:19:28.144 [2024-09-28 10:40:02.593350] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:28.144 [2024-09-28 10:40:02.593357] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:28.144 [2024-09-28 10:40:02.593365] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:28.144 [2024-09-28 10:40:02.593372] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:28.145 [2024-09-28 10:40:02.593379] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:28.145 [2024-09-28 10:40:02.593393] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:28.145 [2024-09-28 10:40:02.593404] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:28.145 [2024-09-28 10:40:02.593410] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:28.145 [2024-09-28 10:40:02.593417] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:28.145 [2024-09-28 10:40:02.593424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.145 [2024-09-28 10:40:02.593434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:28.145 [2024-09-28 10:40:02.593442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.018 ms 00:19:28.145 [2024-09-28 10:40:02.593449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.145 [2024-09-28 10:40:02.594954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.145 [2024-09-28 10:40:02.594984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:28.145 [2024-09-28 10:40:02.594993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.490 ms 00:19:28.145 [2024-09-28 10:40:02.595007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.145 [2024-09-28 10:40:02.595202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.145 [2024-09-28 10:40:02.595220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:28.145 [2024-09-28 10:40:02.595232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.178 ms 00:19:28.145 [2024-09-28 10:40:02.595239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.145 [2024-09-28 10:40:02.599776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.145 [2024-09-28 10:40:02.599900] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:28.145 [2024-09-28 10:40:02.599915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.145 [2024-09-28 10:40:02.599923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.145 [2024-09-28 10:40:02.599998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.145 [2024-09-28 10:40:02.600007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:28.145 [2024-09-28 10:40:02.600014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.145 [2024-09-28 10:40:02.600021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.145 [2024-09-28 10:40:02.600084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.145 [2024-09-28 10:40:02.600094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:28.145 [2024-09-28 10:40:02.600102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.145 [2024-09-28 10:40:02.600109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.145 [2024-09-28 10:40:02.600128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.145 [2024-09-28 10:40:02.600138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:28.145 [2024-09-28 10:40:02.600146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.145 [2024-09-28 10:40:02.600153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.145 [2024-09-28 10:40:02.609101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.145 [2024-09-28 10:40:02.609138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:28.145 [2024-09-28 10:40:02.609147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.145 [2024-09-28 10:40:02.609154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.145 [2024-09-28 10:40:02.616319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.145 [2024-09-28 10:40:02.616362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:28.145 [2024-09-28 10:40:02.616372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.145 [2024-09-28 10:40:02.616379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.145 [2024-09-28 10:40:02.616403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.145 [2024-09-28 10:40:02.616417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:28.145 [2024-09-28 10:40:02.616425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.145 [2024-09-28 10:40:02.616432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.145 [2024-09-28 10:40:02.616475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.145 [2024-09-28 10:40:02.616484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:28.145 [2024-09-28 10:40:02.616494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.145 [2024-09-28 10:40:02.616505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.145 [2024-09-28 10:40:02.616568] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:19:28.145 [2024-09-28 10:40:02.616578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:28.145 [2024-09-28 10:40:02.616586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.145 [2024-09-28 10:40:02.616594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.145 [2024-09-28 10:40:02.616626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.145 [2024-09-28 10:40:02.616635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:28.145 [2024-09-28 10:40:02.616646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.145 [2024-09-28 10:40:02.616656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.145 [2024-09-28 10:40:02.616689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.145 [2024-09-28 10:40:02.616697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:28.145 [2024-09-28 10:40:02.616705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.145 [2024-09-28 10:40:02.616712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.145 [2024-09-28 10:40:02.616750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.145 [2024-09-28 10:40:02.616759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:28.145 [2024-09-28 10:40:02.616770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.145 [2024-09-28 10:40:02.616778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.145 [2024-09-28 10:40:02.616885] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 50.087 ms, result 0 00:19:28.145 00:19:28.145 00:19:28.145 10:40:02 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:30.692 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:19:30.692 10:40:05 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:19:30.692 [2024-09-28 10:40:05.105533] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:19:30.692 [2024-09-28 10:40:05.105678] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88706 ] 00:19:30.692 [2024-09-28 10:40:05.237426] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:19:30.692 [2024-09-28 10:40:05.259269] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.692 [2024-09-28 10:40:05.308790] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:30.692 [2024-09-28 10:40:05.417823] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:30.692 [2024-09-28 10:40:05.417908] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:30.955 [2024-09-28 10:40:05.579704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.955 [2024-09-28 10:40:05.579776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:30.955 [2024-09-28 10:40:05.579792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:30.955 [2024-09-28 10:40:05.579801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.955 [2024-09-28 10:40:05.579860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.955 [2024-09-28 10:40:05.579871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:30.955 [2024-09-28 10:40:05.579880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:19:30.955 [2024-09-28 10:40:05.579888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.955 [2024-09-28 10:40:05.579912] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:30.955 [2024-09-28 10:40:05.580215] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:30.955 [2024-09-28 10:40:05.580233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.955 [2024-09-28 10:40:05.580244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:30.955 [2024-09-28 10:40:05.580254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.329 ms 00:19:30.955 [2024-09-28 10:40:05.580265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.955 [2024-09-28 10:40:05.582056] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:30.955 [2024-09-28 10:40:05.586172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.955 [2024-09-28 10:40:05.586229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:30.955 [2024-09-28 10:40:05.586249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.119 ms 00:19:30.956 [2024-09-28 10:40:05.586258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.956 [2024-09-28 10:40:05.586342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.956 [2024-09-28 10:40:05.586355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:30.956 [2024-09-28 10:40:05.586365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:19:30.956 [2024-09-28 10:40:05.586378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.956 [2024-09-28 10:40:05.595214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.956 [2024-09-28 10:40:05.595259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:30.956 [2024-09-28 10:40:05.595270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.787 ms 00:19:30.956 [2024-09-28 10:40:05.595287] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.956 [2024-09-28 10:40:05.595382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.956 [2024-09-28 10:40:05.595392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:30.956 [2024-09-28 10:40:05.595401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:30.956 [2024-09-28 10:40:05.595413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.956 [2024-09-28 10:40:05.595476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.956 [2024-09-28 10:40:05.595487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:30.956 [2024-09-28 10:40:05.595496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:30.956 [2024-09-28 10:40:05.595503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.956 [2024-09-28 10:40:05.595530] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:30.956 [2024-09-28 10:40:05.597775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.956 [2024-09-28 10:40:05.598031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:30.956 [2024-09-28 10:40:05.598053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.251 ms 00:19:30.956 [2024-09-28 10:40:05.598061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.956 [2024-09-28 10:40:05.598104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.956 [2024-09-28 10:40:05.598122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:30.956 [2024-09-28 10:40:05.598131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:30.956 [2024-09-28 10:40:05.598139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.956 [2024-09-28 10:40:05.598171] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:30.956 [2024-09-28 10:40:05.598197] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:30.956 [2024-09-28 10:40:05.598239] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:30.956 [2024-09-28 10:40:05.598255] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:30.956 [2024-09-28 10:40:05.598361] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:30.956 [2024-09-28 10:40:05.598373] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:30.956 [2024-09-28 10:40:05.598389] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:30.956 [2024-09-28 10:40:05.598402] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:30.956 [2024-09-28 10:40:05.598411] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:30.956 [2024-09-28 10:40:05.598420] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:30.956 [2024-09-28 10:40:05.598429] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
address size: 4 00:19:30.956 [2024-09-28 10:40:05.598440] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:30.956 [2024-09-28 10:40:05.598449] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:30.956 [2024-09-28 10:40:05.598605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.956 [2024-09-28 10:40:05.598614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:30.956 [2024-09-28 10:40:05.598624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.440 ms 00:19:30.956 [2024-09-28 10:40:05.598635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.956 [2024-09-28 10:40:05.598722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.956 [2024-09-28 10:40:05.598734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:30.956 [2024-09-28 10:40:05.598744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:30.956 [2024-09-28 10:40:05.598753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.956 [2024-09-28 10:40:05.598858] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:30.956 [2024-09-28 10:40:05.598869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:30.956 [2024-09-28 10:40:05.598879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:30.956 [2024-09-28 10:40:05.598896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:30.956 [2024-09-28 10:40:05.598905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:30.956 [2024-09-28 10:40:05.598913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:30.956 [2024-09-28 10:40:05.598921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:30.956 [2024-09-28 10:40:05.598929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:30.956 [2024-09-28 10:40:05.598942] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:30.956 [2024-09-28 10:40:05.598949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:30.956 [2024-09-28 10:40:05.598956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:30.956 [2024-09-28 10:40:05.598983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:30.956 [2024-09-28 10:40:05.598990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:30.956 [2024-09-28 10:40:05.598998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:30.956 [2024-09-28 10:40:05.599005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:30.956 [2024-09-28 10:40:05.599012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:30.956 [2024-09-28 10:40:05.599022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:30.956 [2024-09-28 10:40:05.599036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:30.956 [2024-09-28 10:40:05.599044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:30.956 [2024-09-28 10:40:05.599081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:30.956 [2024-09-28 10:40:05.599095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:30.956 [2024-09-28 10:40:05.599102] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:30.956 [2024-09-28 10:40:05.599110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:30.956 [2024-09-28 10:40:05.599117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:30.956 [2024-09-28 10:40:05.599124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:30.956 [2024-09-28 10:40:05.599131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:30.956 [2024-09-28 10:40:05.599138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:30.956 [2024-09-28 10:40:05.599151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:30.956 [2024-09-28 10:40:05.599158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:30.956 [2024-09-28 10:40:05.599166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:30.956 [2024-09-28 10:40:05.599173] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:30.956 [2024-09-28 10:40:05.599180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:30.956 [2024-09-28 10:40:05.599188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:30.956 [2024-09-28 10:40:05.599194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:30.956 [2024-09-28 10:40:05.599201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:30.956 [2024-09-28 10:40:05.599208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:30.956 [2024-09-28 10:40:05.599215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:30.956 [2024-09-28 10:40:05.599222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:30.956 [2024-09-28 10:40:05.599230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:30.956 [2024-09-28 10:40:05.599237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:30.956 [2024-09-28 10:40:05.599244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:30.956 [2024-09-28 10:40:05.599250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:30.956 [2024-09-28 10:40:05.599256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:30.956 [2024-09-28 10:40:05.599266] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:30.956 [2024-09-28 10:40:05.599279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:30.956 [2024-09-28 10:40:05.599289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:30.956 [2024-09-28 10:40:05.599301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:30.956 [2024-09-28 10:40:05.599310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:30.956 [2024-09-28 10:40:05.599319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:30.956 [2024-09-28 10:40:05.599326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:30.956 [2024-09-28 10:40:05.599333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:30.956 [2024-09-28 10:40:05.599340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:30.956 [2024-09-28 10:40:05.599347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:19:30.956 [2024-09-28 10:40:05.599355] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:30.956 [2024-09-28 10:40:05.599365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:30.956 [2024-09-28 10:40:05.599375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:30.956 [2024-09-28 10:40:05.599383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:30.957 [2024-09-28 10:40:05.599390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:30.957 [2024-09-28 10:40:05.599397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:30.957 [2024-09-28 10:40:05.599407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:30.957 [2024-09-28 10:40:05.599415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:30.957 [2024-09-28 10:40:05.599422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:30.957 [2024-09-28 10:40:05.599429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:30.957 [2024-09-28 10:40:05.599436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:30.957 [2024-09-28 10:40:05.599443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:30.957 [2024-09-28 10:40:05.599451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:30.957 [2024-09-28 10:40:05.599457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:30.957 [2024-09-28 10:40:05.599464] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:30.957 [2024-09-28 10:40:05.599472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:30.957 [2024-09-28 10:40:05.599478] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:30.957 [2024-09-28 10:40:05.599487] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:30.957 [2024-09-28 10:40:05.599500] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:30.957 [2024-09-28 10:40:05.599508] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:30.957 [2024-09-28 10:40:05.599515] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:30.957 [2024-09-28 10:40:05.599522] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:30.957 [2024-09-28 10:40:05.599533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.957 [2024-09-28 10:40:05.599541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:30.957 [2024-09-28 10:40:05.599552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.741 ms 00:19:30.957 [2024-09-28 10:40:05.599564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.957 [2024-09-28 10:40:05.621745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.957 [2024-09-28 10:40:05.621815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:30.957 [2024-09-28 10:40:05.621830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.107 ms 00:19:30.957 [2024-09-28 10:40:05.621838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.957 [2024-09-28 10:40:05.621932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.957 [2024-09-28 10:40:05.621942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:30.957 [2024-09-28 10:40:05.621951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:19:30.957 [2024-09-28 10:40:05.621987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.957 [2024-09-28 10:40:05.634060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.957 [2024-09-28 10:40:05.634108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:30.957 [2024-09-28 10:40:05.634121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.005 ms 00:19:30.957 [2024-09-28 10:40:05.634136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.957 [2024-09-28 10:40:05.634174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.957 [2024-09-28 10:40:05.634185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:30.957 [2024-09-28 10:40:05.634196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:30.957 [2024-09-28 10:40:05.634211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.957 [2024-09-28 10:40:05.634730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.957 [2024-09-28 10:40:05.634755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:30.957 [2024-09-28 10:40:05.634768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.458 ms 00:19:30.957 [2024-09-28 10:40:05.634779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.957 [2024-09-28 10:40:05.634942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.957 [2024-09-28 10:40:05.634954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:30.957 [2024-09-28 10:40:05.635000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:19:30.957 [2024-09-28 10:40:05.635011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.957 [2024-09-28 10:40:05.641439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.957 [2024-09-28 
10:40:05.641488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:30.957 [2024-09-28 10:40:05.641498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.401 ms 00:19:30.957 [2024-09-28 10:40:05.641510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.957 [2024-09-28 10:40:05.645138] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:30.957 [2024-09-28 10:40:05.645190] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:30.957 [2024-09-28 10:40:05.645202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.957 [2024-09-28 10:40:05.645211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:30.957 [2024-09-28 10:40:05.645227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.604 ms 00:19:30.957 [2024-09-28 10:40:05.645234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.957 [2024-09-28 10:40:05.660873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.957 [2024-09-28 10:40:05.661063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:30.957 [2024-09-28 10:40:05.661086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.582 ms 00:19:30.957 [2024-09-28 10:40:05.661095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.957 [2024-09-28 10:40:05.663814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.957 [2024-09-28 10:40:05.663861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:30.957 [2024-09-28 10:40:05.663872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.676 ms 00:19:30.957 [2024-09-28 10:40:05.663879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.957 [2024-09-28 10:40:05.665986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.957 [2024-09-28 10:40:05.666025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:30.957 [2024-09-28 10:40:05.666035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.062 ms 00:19:30.957 [2024-09-28 10:40:05.666042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.957 [2024-09-28 10:40:05.666399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.957 [2024-09-28 10:40:05.666418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:30.957 [2024-09-28 10:40:05.666427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:19:30.957 [2024-09-28 10:40:05.666435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.957 [2024-09-28 10:40:05.687772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.957 [2024-09-28 10:40:05.687832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:30.957 [2024-09-28 10:40:05.687845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.315 ms 00:19:30.957 [2024-09-28 10:40:05.687854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.957 [2024-09-28 10:40:05.695987] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:30.957 [2024-09-28 10:40:05.698982] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.957 [2024-09-28 10:40:05.699166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:30.957 [2024-09-28 10:40:05.699184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.067 ms 00:19:30.957 [2024-09-28 10:40:05.699199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.957 [2024-09-28 10:40:05.699280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.957 [2024-09-28 10:40:05.699292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:30.957 [2024-09-28 10:40:05.699301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:30.957 [2024-09-28 10:40:05.699314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.957 [2024-09-28 10:40:05.699382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.957 [2024-09-28 10:40:05.699392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:30.957 [2024-09-28 10:40:05.699405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:19:30.957 [2024-09-28 10:40:05.699413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.957 [2024-09-28 10:40:05.699432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.957 [2024-09-28 10:40:05.699441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:30.957 [2024-09-28 10:40:05.699454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:30.957 [2024-09-28 10:40:05.699465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.957 [2024-09-28 10:40:05.699500] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:30.957 [2024-09-28 10:40:05.699510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.957 [2024-09-28 10:40:05.699524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:30.957 [2024-09-28 10:40:05.699532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:30.957 [2024-09-28 10:40:05.699542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.957 [2024-09-28 10:40:05.704375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.957 [2024-09-28 10:40:05.704416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:30.957 [2024-09-28 10:40:05.704427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.815 ms 00:19:30.957 [2024-09-28 10:40:05.704435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.957 [2024-09-28 10:40:05.704523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.957 [2024-09-28 10:40:05.704532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:30.957 [2024-09-28 10:40:05.704541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:19:30.957 [2024-09-28 10:40:05.704552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.958 [2024-09-28 10:40:05.705599] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 125.476 ms, result 0 00:20:26.494  Copying: 18/1024 [MB] (18 MBps) Copying: 31/1024 [MB] (12 MBps) Copying: 47/1024 [MB] (16 MBps) Copying: 65/1024 [MB] (17 
MBps) Copying: 99/1024 [MB] (33 MBps) Copying: 131/1024 [MB] (31 MBps) Copying: 141/1024 [MB] (10 MBps) Copying: 154756/1048576 [kB] (10220 kBps) Copying: 172/1024 [MB] (21 MBps) Copying: 183/1024 [MB] (10 MBps) Copying: 199/1024 [MB] (15 MBps) Copying: 212/1024 [MB] (12 MBps) Copying: 233/1024 [MB] (20 MBps) Copying: 261/1024 [MB] (28 MBps) Copying: 275/1024 [MB] (14 MBps) Copying: 293/1024 [MB] (17 MBps) Copying: 330/1024 [MB] (37 MBps) Copying: 373/1024 [MB] (42 MBps) Copying: 391/1024 [MB] (17 MBps) Copying: 403/1024 [MB] (11 MBps) Copying: 417/1024 [MB] (13 MBps) Copying: 436/1024 [MB] (19 MBps) Copying: 449/1024 [MB] (12 MBps) Copying: 460/1024 [MB] (10 MBps) Copying: 474/1024 [MB] (13 MBps) Copying: 485/1024 [MB] (11 MBps) Copying: 498/1024 [MB] (12 MBps) Copying: 512/1024 [MB] (14 MBps) Copying: 526/1024 [MB] (14 MBps) Copying: 541/1024 [MB] (14 MBps) Copying: 553/1024 [MB] (12 MBps) Copying: 571/1024 [MB] (17 MBps) Copying: 586/1024 [MB] (14 MBps) Copying: 598/1024 [MB] (12 MBps) Copying: 616/1024 [MB] (18 MBps) Copying: 631/1024 [MB] (14 MBps) Copying: 657/1024 [MB] (26 MBps) Copying: 695/1024 [MB] (37 MBps) Copying: 713/1024 [MB] (17 MBps) Copying: 728/1024 [MB] (14 MBps) Copying: 754/1024 [MB] (26 MBps) Copying: 778/1024 [MB] (23 MBps) Copying: 791/1024 [MB] (13 MBps) Copying: 824/1024 [MB] (32 MBps) Copying: 853/1024 [MB] (28 MBps) Copying: 865/1024 [MB] (12 MBps) Copying: 892/1024 [MB] (26 MBps) Copying: 935/1024 [MB] (43 MBps) Copying: 952/1024 [MB] (16 MBps) Copying: 967/1024 [MB] (14 MBps) Copying: 984/1024 [MB] (17 MBps) Copying: 995/1024 [MB] (10 MBps) Copying: 1009/1024 [MB] (14 MBps) Copying: 1044004/1048576 [kB] (10152 kBps) Copying: 1048252/1048576 [kB] (4248 kBps) Copying: 1024/1024 [MB] (average 18 MBps)[2024-09-28 10:41:01.106007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.494 [2024-09-28 10:41:01.106078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:26.494 [2024-09-28 10:41:01.106095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:26.494 [2024-09-28 10:41:01.106105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.494 [2024-09-28 10:41:01.107934] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:26.494 [2024-09-28 10:41:01.112536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.494 [2024-09-28 10:41:01.112709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:26.494 [2024-09-28 10:41:01.112783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.521 ms 00:20:26.494 [2024-09-28 10:41:01.112821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.494 [2024-09-28 10:41:01.124975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.494 [2024-09-28 10:41:01.125143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:26.494 [2024-09-28 10:41:01.125214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.885 ms 00:20:26.494 [2024-09-28 10:41:01.125240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.494 [2024-09-28 10:41:01.149907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.494 [2024-09-28 10:41:01.150104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:26.494 [2024-09-28 10:41:01.150178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 24.617 ms 00:20:26.494 [2024-09-28 10:41:01.150203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.494 [2024-09-28 10:41:01.156427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.494 [2024-09-28 10:41:01.156574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:26.494 [2024-09-28 10:41:01.156755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.165 ms 00:20:26.494 [2024-09-28 10:41:01.156778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.494 [2024-09-28 10:41:01.159707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.494 [2024-09-28 10:41:01.159870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:26.494 [2024-09-28 10:41:01.159888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.863 ms 00:20:26.494 [2024-09-28 10:41:01.159896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.494 [2024-09-28 10:41:01.165063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.494 [2024-09-28 10:41:01.165223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:26.494 [2024-09-28 10:41:01.165242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.035 ms 00:20:26.494 [2024-09-28 10:41:01.165251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.758 [2024-09-28 10:41:01.331507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.758 [2024-09-28 10:41:01.331560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:26.758 [2024-09-28 10:41:01.331574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 166.202 ms 00:20:26.758 [2024-09-28 10:41:01.331584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.758 [2024-09-28 10:41:01.334204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.758 [2024-09-28 10:41:01.334254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:26.758 [2024-09-28 10:41:01.334264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.602 ms 00:20:26.758 [2024-09-28 10:41:01.334272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.758 [2024-09-28 10:41:01.336361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.758 [2024-09-28 10:41:01.336408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:26.758 [2024-09-28 10:41:01.336418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.044 ms 00:20:26.758 [2024-09-28 10:41:01.336425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.758 [2024-09-28 10:41:01.338146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.758 [2024-09-28 10:41:01.338207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:26.758 [2024-09-28 10:41:01.338217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.677 ms 00:20:26.758 [2024-09-28 10:41:01.338224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.758 [2024-09-28 10:41:01.339924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.758 [2024-09-28 10:41:01.340122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:26.758 
[2024-09-28 10:41:01.340140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.619 ms 00:20:26.758 [2024-09-28 10:41:01.340149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.758 [2024-09-28 10:41:01.340184] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:26.758 [2024-09-28 10:41:01.340198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 107520 / 261120 wr_cnt: 1 state: open 00:20:26.758 [2024-09-28 10:41:01.340209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:26.758 [2024-09-28 10:41:01.340217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:26.758 [2024-09-28 10:41:01.340226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:26.758 [2024-09-28 10:41:01.340234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:26.758 [2024-09-28 10:41:01.340242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:26.758 [2024-09-28 10:41:01.340250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:26.758 [2024-09-28 10:41:01.340257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:26.758 [2024-09-28 10:41:01.340265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:26.758 [2024-09-28 10:41:01.340272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:26.758 [2024-09-28 10:41:01.340279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:26.758 [2024-09-28 10:41:01.340287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:26.758 [2024-09-28 10:41:01.340294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:26.758 [2024-09-28 10:41:01.340302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:26.758 [2024-09-28 10:41:01.340310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:26.758 [2024-09-28 10:41:01.340317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:26.758 [2024-09-28 10:41:01.340325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:26.758 [2024-09-28 10:41:01.340333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:26.758 [2024-09-28 10:41:01.340340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:26.758 [2024-09-28 10:41:01.340349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:26.758 [2024-09-28 10:41:01.340356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:26.758 [2024-09-28 10:41:01.340364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:26.758 [2024-09-28 10:41:01.340371] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:26.758 [2024-09-28 10:41:01.340379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:26.758 [2024-09-28 10:41:01.340386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:26.758 [2024-09-28 10:41:01.340394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:26.758 [2024-09-28 10:41:01.340401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:26.758 [2024-09-28 10:41:01.340410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:26.758 [2024-09-28 10:41:01.340420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:26.758 [2024-09-28 10:41:01.340428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:26.758 [2024-09-28 10:41:01.340437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:26.758 [2024-09-28 10:41:01.340445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:26.758 [2024-09-28 10:41:01.340452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:26.758 [2024-09-28 10:41:01.340460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:26.758 [2024-09-28 10:41:01.340468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:26.758 [2024-09-28 10:41:01.340475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:26.758 [2024-09-28 10:41:01.340482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:26.758 [2024-09-28 10:41:01.340490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:26.758 [2024-09-28 10:41:01.340497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:26.758 [2024-09-28 10:41:01.340505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:26.758 [2024-09-28 10:41:01.340513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:26.758 [2024-09-28 10:41:01.340521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:26.758 [2024-09-28 10:41:01.340529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:26.758 [2024-09-28 10:41:01.340537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:26.758 [2024-09-28 10:41:01.340545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:26.759 [2024-09-28 10:41:01.340553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:26.759 [2024-09-28 10:41:01.340561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:26.759 [2024-09-28 10:41:01.340569] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:26.759 [2024-09-28 10:41:01.340577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:26.759 [2024-09-28 10:41:01.340584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:26.759 [2024-09-28 10:41:01.340591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:26.759 [2024-09-28 10:41:01.340599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:26.759 [2024-09-28 10:41:01.340606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:26.759 [2024-09-28 10:41:01.340614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:26.759 [2024-09-28 10:41:01.340621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:26.759 [2024-09-28 10:41:01.340629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:26.759 [2024-09-28 10:41:01.340637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:26.759 [2024-09-28 10:41:01.340645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:26.759 [2024-09-28 10:41:01.340653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:26.759 [2024-09-28 10:41:01.340661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:26.759 [2024-09-28 10:41:01.340670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:26.759 [2024-09-28 10:41:01.340678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:26.759 [2024-09-28 10:41:01.340695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:26.759 [2024-09-28 10:41:01.340703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:26.759 [2024-09-28 10:41:01.340711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:26.759 [2024-09-28 10:41:01.340720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:26.759 [2024-09-28 10:41:01.340728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:26.759 [2024-09-28 10:41:01.340736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:26.759 [2024-09-28 10:41:01.340743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:26.759 [2024-09-28 10:41:01.340751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:26.759 [2024-09-28 10:41:01.340759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:26.759 [2024-09-28 10:41:01.340767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:26.759 [2024-09-28 
10:41:01.340774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:26.759 [2024-09-28 10:41:01.340782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:26.759 [2024-09-28 10:41:01.340790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:26.759 [2024-09-28 10:41:01.340798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:26.759 [2024-09-28 10:41:01.340806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:26.759 [2024-09-28 10:41:01.340813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:26.759 [2024-09-28 10:41:01.340821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:26.759 [2024-09-28 10:41:01.340829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:26.759 [2024-09-28 10:41:01.340836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:26.759 [2024-09-28 10:41:01.340844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:26.759 [2024-09-28 10:41:01.340852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:26.759 [2024-09-28 10:41:01.340860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:26.759 [2024-09-28 10:41:01.340867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:26.759 [2024-09-28 10:41:01.340877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:26.759 [2024-09-28 10:41:01.340885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:26.759 [2024-09-28 10:41:01.340893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:26.759 [2024-09-28 10:41:01.340901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:26.759 [2024-09-28 10:41:01.340908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:26.759 [2024-09-28 10:41:01.340915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:26.759 [2024-09-28 10:41:01.340923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:26.759 [2024-09-28 10:41:01.340932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:26.759 [2024-09-28 10:41:01.340941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:26.759 [2024-09-28 10:41:01.340949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:26.759 [2024-09-28 10:41:01.340957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:26.759 [2024-09-28 10:41:01.340978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 
00:20:26.759 [2024-09-28 10:41:01.340986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:26.759 [2024-09-28 10:41:01.340995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:26.759 [2024-09-28 10:41:01.341003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:26.759 [2024-09-28 10:41:01.341019] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:26.759 [2024-09-28 10:41:01.341034] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e5266b08-7571-4e39-9a75-9919ffbc7afd 00:20:26.759 [2024-09-28 10:41:01.341043] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 107520 00:20:26.759 [2024-09-28 10:41:01.341059] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 108480 00:20:26.759 [2024-09-28 10:41:01.341074] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 107520 00:20:26.759 [2024-09-28 10:41:01.341083] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0089 00:20:26.759 [2024-09-28 10:41:01.341090] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:26.759 [2024-09-28 10:41:01.341102] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:26.759 [2024-09-28 10:41:01.341110] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:26.759 [2024-09-28 10:41:01.341117] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:26.759 [2024-09-28 10:41:01.341124] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:26.759 [2024-09-28 10:41:01.341131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.759 [2024-09-28 10:41:01.341139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:26.759 [2024-09-28 10:41:01.341148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.948 ms 00:20:26.759 [2024-09-28 10:41:01.341156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.759 [2024-09-28 10:41:01.343449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.759 [2024-09-28 10:41:01.343520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:26.759 [2024-09-28 10:41:01.343534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.275 ms 00:20:26.759 [2024-09-28 10:41:01.343542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.759 [2024-09-28 10:41:01.343668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.759 [2024-09-28 10:41:01.343687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:26.759 [2024-09-28 10:41:01.343696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:20:26.759 [2024-09-28 10:41:01.343704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.759 [2024-09-28 10:41:01.350513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.759 [2024-09-28 10:41:01.350678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:26.759 [2024-09-28 10:41:01.350698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.759 [2024-09-28 10:41:01.350707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.759 [2024-09-28 10:41:01.350773] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.759 [2024-09-28 10:41:01.350782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:26.759 [2024-09-28 10:41:01.350791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.759 [2024-09-28 10:41:01.350798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.759 [2024-09-28 10:41:01.350874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.759 [2024-09-28 10:41:01.350886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:26.759 [2024-09-28 10:41:01.350895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.760 [2024-09-28 10:41:01.350903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.760 [2024-09-28 10:41:01.350918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.760 [2024-09-28 10:41:01.350926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:26.760 [2024-09-28 10:41:01.350934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.760 [2024-09-28 10:41:01.350942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.760 [2024-09-28 10:41:01.363674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.760 [2024-09-28 10:41:01.363723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:26.760 [2024-09-28 10:41:01.363734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.760 [2024-09-28 10:41:01.363742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.760 [2024-09-28 10:41:01.373619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.760 [2024-09-28 10:41:01.373789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:26.760 [2024-09-28 10:41:01.373806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.760 [2024-09-28 10:41:01.373815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.760 [2024-09-28 10:41:01.373860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.760 [2024-09-28 10:41:01.373876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:26.760 [2024-09-28 10:41:01.373884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.760 [2024-09-28 10:41:01.373892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.760 [2024-09-28 10:41:01.373927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.760 [2024-09-28 10:41:01.373936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:26.760 [2024-09-28 10:41:01.373944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.760 [2024-09-28 10:41:01.373952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.760 [2024-09-28 10:41:01.374049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.760 [2024-09-28 10:41:01.374059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:26.760 [2024-09-28 10:41:01.374071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.760 [2024-09-28 10:41:01.374079] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.760 [2024-09-28 10:41:01.374109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.760 [2024-09-28 10:41:01.374118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:26.760 [2024-09-28 10:41:01.374126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.760 [2024-09-28 10:41:01.374134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.760 [2024-09-28 10:41:01.374174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.760 [2024-09-28 10:41:01.374183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:26.760 [2024-09-28 10:41:01.374195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.760 [2024-09-28 10:41:01.374203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.760 [2024-09-28 10:41:01.374246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.760 [2024-09-28 10:41:01.374260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:26.760 [2024-09-28 10:41:01.374269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.760 [2024-09-28 10:41:01.374280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.760 [2024-09-28 10:41:01.374412] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 271.172 ms, result 0 00:20:27.703 00:20:27.703 00:20:27.703 10:41:02 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:20:27.703 [2024-09-28 10:41:02.211483] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:20:27.703 [2024-09-28 10:41:02.211642] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89293 ] 00:20:27.703 [2024-09-28 10:41:02.344838] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
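For readers following the restore flow: the spdk_dd invocation above reads a window of the restored ftl0 bdev back into test/ftl/testfile so it can be checksummed against the previously recorded testfile.md5 near the end of this log. A minimal sizing sketch, assuming the --skip/--count values are counted in 4 KiB FTL logical blocks (an assumption; the log does not state the unit):

# Hypothetical back-of-the-envelope check of the spdk_dd read-back window.
# Assumption: --skip/--count are expressed in 4 KiB FTL logical blocks.
BLOCK_SIZE = 4096          # bytes per logical block (assumed)

skip_blocks = 131072       # from the spdk_dd command line above
count_blocks = 262144

skip_mib = skip_blocks * BLOCK_SIZE / (1024 * 1024)
count_mib = count_blocks * BLOCK_SIZE / (1024 * 1024)

print(f"offset: {skip_mib:.0f} MiB, length: {count_mib:.0f} MiB")
# offset: 512 MiB, length: 1024 MiB -- consistent with the
# "Copying: .../1024 [MB]" progress printed further down in this log.

Under that assumption the read-back covers 1024 MiB starting 512 MiB into the device, which lines up with the 1024 MB copy progress reported below.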
00:20:27.703 [2024-09-28 10:41:02.365082] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.703 [2024-09-28 10:41:02.413741] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:27.967 [2024-09-28 10:41:02.523477] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:27.967 [2024-09-28 10:41:02.523558] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:27.967 [2024-09-28 10:41:02.685415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.967 [2024-09-28 10:41:02.685651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:27.967 [2024-09-28 10:41:02.685676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:27.967 [2024-09-28 10:41:02.685693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.967 [2024-09-28 10:41:02.685762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.967 [2024-09-28 10:41:02.685773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:27.967 [2024-09-28 10:41:02.685782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:20:27.967 [2024-09-28 10:41:02.685790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.967 [2024-09-28 10:41:02.685815] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:27.967 [2024-09-28 10:41:02.686097] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:27.967 [2024-09-28 10:41:02.686114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.967 [2024-09-28 10:41:02.686122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:27.967 [2024-09-28 10:41:02.686132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:20:27.967 [2024-09-28 10:41:02.686146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.967 [2024-09-28 10:41:02.688030] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:27.967 [2024-09-28 10:41:02.691670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.967 [2024-09-28 10:41:02.691731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:27.967 [2024-09-28 10:41:02.691745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.642 ms 00:20:27.967 [2024-09-28 10:41:02.691753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.967 [2024-09-28 10:41:02.691831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.967 [2024-09-28 10:41:02.691840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:27.967 [2024-09-28 10:41:02.691849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:20:27.967 [2024-09-28 10:41:02.691856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.967 [2024-09-28 10:41:02.700632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.967 [2024-09-28 10:41:02.700680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:27.967 [2024-09-28 10:41:02.700691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.731 ms 00:20:27.967 [2024-09-28 10:41:02.700702] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.967 [2024-09-28 10:41:02.700785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.967 [2024-09-28 10:41:02.700794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:27.967 [2024-09-28 10:41:02.700803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:20:27.967 [2024-09-28 10:41:02.700811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.967 [2024-09-28 10:41:02.700873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.967 [2024-09-28 10:41:02.700886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:27.967 [2024-09-28 10:41:02.700895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:27.967 [2024-09-28 10:41:02.700907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.967 [2024-09-28 10:41:02.700934] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:27.967 [2024-09-28 10:41:02.703175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.967 [2024-09-28 10:41:02.703351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:27.967 [2024-09-28 10:41:02.703369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.247 ms 00:20:27.967 [2024-09-28 10:41:02.703376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.967 [2024-09-28 10:41:02.703414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.967 [2024-09-28 10:41:02.703422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:27.967 [2024-09-28 10:41:02.703438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:27.967 [2024-09-28 10:41:02.703449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.967 [2024-09-28 10:41:02.703478] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:27.967 [2024-09-28 10:41:02.703498] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:27.967 [2024-09-28 10:41:02.703535] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:27.968 [2024-09-28 10:41:02.703554] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:27.968 [2024-09-28 10:41:02.703663] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:27.968 [2024-09-28 10:41:02.703673] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:27.968 [2024-09-28 10:41:02.703684] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:27.968 [2024-09-28 10:41:02.703697] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:27.968 [2024-09-28 10:41:02.703706] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:27.968 [2024-09-28 10:41:02.703713] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:27.968 [2024-09-28 10:41:02.703720] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
address size: 4 00:20:27.968 [2024-09-28 10:41:02.703727] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:27.968 [2024-09-28 10:41:02.703734] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:27.968 [2024-09-28 10:41:02.703742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.968 [2024-09-28 10:41:02.703749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:27.968 [2024-09-28 10:41:02.703761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:20:27.968 [2024-09-28 10:41:02.703768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.968 [2024-09-28 10:41:02.703851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.968 [2024-09-28 10:41:02.703867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:27.968 [2024-09-28 10:41:02.703874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:27.968 [2024-09-28 10:41:02.703882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.968 [2024-09-28 10:41:02.704005] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:27.968 [2024-09-28 10:41:02.704017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:27.968 [2024-09-28 10:41:02.704027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:27.968 [2024-09-28 10:41:02.704036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:27.968 [2024-09-28 10:41:02.704050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:27.968 [2024-09-28 10:41:02.704059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:27.968 [2024-09-28 10:41:02.704068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:27.968 [2024-09-28 10:41:02.704076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:27.968 [2024-09-28 10:41:02.704094] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:27.968 [2024-09-28 10:41:02.704102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:27.968 [2024-09-28 10:41:02.704111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:27.968 [2024-09-28 10:41:02.704119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:27.968 [2024-09-28 10:41:02.704127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:27.968 [2024-09-28 10:41:02.704138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:27.968 [2024-09-28 10:41:02.704146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:27.968 [2024-09-28 10:41:02.704155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:27.968 [2024-09-28 10:41:02.704162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:27.968 [2024-09-28 10:41:02.704170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:27.968 [2024-09-28 10:41:02.704177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:27.968 [2024-09-28 10:41:02.704185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:27.968 [2024-09-28 10:41:02.704193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:27.968 [2024-09-28 10:41:02.704201] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:27.968 [2024-09-28 10:41:02.704208] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:27.968 [2024-09-28 10:41:02.704216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:27.968 [2024-09-28 10:41:02.704223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:27.968 [2024-09-28 10:41:02.704231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:27.968 [2024-09-28 10:41:02.704239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:27.968 [2024-09-28 10:41:02.704246] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:27.968 [2024-09-28 10:41:02.704261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:27.968 [2024-09-28 10:41:02.704272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:27.968 [2024-09-28 10:41:02.704279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:27.968 [2024-09-28 10:41:02.704287] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:27.968 [2024-09-28 10:41:02.704294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:27.968 [2024-09-28 10:41:02.704302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:27.968 [2024-09-28 10:41:02.704310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:27.968 [2024-09-28 10:41:02.704317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:27.968 [2024-09-28 10:41:02.704324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:27.968 [2024-09-28 10:41:02.704334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:27.968 [2024-09-28 10:41:02.704344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:27.968 [2024-09-28 10:41:02.704352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:27.968 [2024-09-28 10:41:02.704359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:27.968 [2024-09-28 10:41:02.704367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:27.968 [2024-09-28 10:41:02.704374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:27.968 [2024-09-28 10:41:02.704382] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:27.968 [2024-09-28 10:41:02.704391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:27.968 [2024-09-28 10:41:02.704404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:27.968 [2024-09-28 10:41:02.704413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:27.968 [2024-09-28 10:41:02.704421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:27.968 [2024-09-28 10:41:02.704429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:27.968 [2024-09-28 10:41:02.704436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:27.968 [2024-09-28 10:41:02.704443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:27.968 [2024-09-28 10:41:02.704449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:27.968 [2024-09-28 10:41:02.704456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:20:27.968 [2024-09-28 10:41:02.704464] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:27.968 [2024-09-28 10:41:02.704473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:27.968 [2024-09-28 10:41:02.704486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:27.968 [2024-09-28 10:41:02.704493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:27.968 [2024-09-28 10:41:02.704499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:27.968 [2024-09-28 10:41:02.704507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:27.968 [2024-09-28 10:41:02.704514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:27.968 [2024-09-28 10:41:02.704521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:27.968 [2024-09-28 10:41:02.704530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:27.968 [2024-09-28 10:41:02.704536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:27.968 [2024-09-28 10:41:02.704544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:27.968 [2024-09-28 10:41:02.704551] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:27.968 [2024-09-28 10:41:02.704557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:27.968 [2024-09-28 10:41:02.704565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:27.968 [2024-09-28 10:41:02.704572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:27.968 [2024-09-28 10:41:02.704580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:27.968 [2024-09-28 10:41:02.704588] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:27.968 [2024-09-28 10:41:02.704597] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:27.968 [2024-09-28 10:41:02.704605] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:27.968 [2024-09-28 10:41:02.704613] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:27.968 [2024-09-28 10:41:02.704620] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:27.968 [2024-09-28 10:41:02.704628] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:27.968 [2024-09-28 10:41:02.704636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.968 [2024-09-28 10:41:02.704643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:27.968 [2024-09-28 10:41:02.704654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.723 ms 00:20:27.968 [2024-09-28 10:41:02.704661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.968 [2024-09-28 10:41:02.726609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.968 [2024-09-28 10:41:02.726677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:27.969 [2024-09-28 10:41:02.726691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.896 ms 00:20:27.969 [2024-09-28 10:41:02.726704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.969 [2024-09-28 10:41:02.726803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.969 [2024-09-28 10:41:02.726813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:27.969 [2024-09-28 10:41:02.726822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:20:27.969 [2024-09-28 10:41:02.726830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.969 [2024-09-28 10:41:02.738173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.969 [2024-09-28 10:41:02.738216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:27.969 [2024-09-28 10:41:02.738228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.271 ms 00:20:27.969 [2024-09-28 10:41:02.738236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.969 [2024-09-28 10:41:02.738273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.969 [2024-09-28 10:41:02.738282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:27.969 [2024-09-28 10:41:02.738291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:27.969 [2024-09-28 10:41:02.738299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.969 [2024-09-28 10:41:02.738821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.969 [2024-09-28 10:41:02.738848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:27.969 [2024-09-28 10:41:02.738859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.466 ms 00:20:27.969 [2024-09-28 10:41:02.738868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.969 [2024-09-28 10:41:02.739048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.969 [2024-09-28 10:41:02.739064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:27.969 [2024-09-28 10:41:02.739079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.159 ms 00:20:27.969 [2024-09-28 10:41:02.739088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.230 [2024-09-28 10:41:02.746006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.230 [2024-09-28 
10:41:02.746037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:28.230 [2024-09-28 10:41:02.746055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.891 ms 00:20:28.230 [2024-09-28 10:41:02.746064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.230 [2024-09-28 10:41:02.749705] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:20:28.230 [2024-09-28 10:41:02.749742] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:28.230 [2024-09-28 10:41:02.749764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.230 [2024-09-28 10:41:02.749772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:28.230 [2024-09-28 10:41:02.749781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.593 ms 00:20:28.230 [2024-09-28 10:41:02.749796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.230 [2024-09-28 10:41:02.766229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.230 [2024-09-28 10:41:02.766291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:28.230 [2024-09-28 10:41:02.766306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.381 ms 00:20:28.230 [2024-09-28 10:41:02.766321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.230 [2024-09-28 10:41:02.769129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.230 [2024-09-28 10:41:02.769164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:28.230 [2024-09-28 10:41:02.769175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.742 ms 00:20:28.230 [2024-09-28 10:41:02.769183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.230 [2024-09-28 10:41:02.771807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.230 [2024-09-28 10:41:02.771845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:28.230 [2024-09-28 10:41:02.771855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.579 ms 00:20:28.230 [2024-09-28 10:41:02.771863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.230 [2024-09-28 10:41:02.772225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.230 [2024-09-28 10:41:02.772241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:28.230 [2024-09-28 10:41:02.772251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:20:28.230 [2024-09-28 10:41:02.772262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.230 [2024-09-28 10:41:02.794496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.230 [2024-09-28 10:41:02.794548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:28.230 [2024-09-28 10:41:02.794560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.211 ms 00:20:28.230 [2024-09-28 10:41:02.794569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.230 [2024-09-28 10:41:02.803063] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:28.230 [2024-09-28 10:41:02.806324] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.230 [2024-09-28 10:41:02.806358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:28.230 [2024-09-28 10:41:02.806378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.694 ms 00:20:28.230 [2024-09-28 10:41:02.806386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.230 [2024-09-28 10:41:02.806465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.230 [2024-09-28 10:41:02.806479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:28.230 [2024-09-28 10:41:02.806489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:28.230 [2024-09-28 10:41:02.806497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.230 [2024-09-28 10:41:02.808234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.230 [2024-09-28 10:41:02.808271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:28.230 [2024-09-28 10:41:02.808281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.700 ms 00:20:28.230 [2024-09-28 10:41:02.808291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.230 [2024-09-28 10:41:02.808319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.230 [2024-09-28 10:41:02.808327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:28.230 [2024-09-28 10:41:02.808336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:28.230 [2024-09-28 10:41:02.808344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.230 [2024-09-28 10:41:02.808383] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:28.230 [2024-09-28 10:41:02.808393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.230 [2024-09-28 10:41:02.808401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:28.230 [2024-09-28 10:41:02.808412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:28.230 [2024-09-28 10:41:02.808421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.230 [2024-09-28 10:41:02.813681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.230 [2024-09-28 10:41:02.813731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:28.230 [2024-09-28 10:41:02.813742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.242 ms 00:20:28.230 [2024-09-28 10:41:02.813751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.230 [2024-09-28 10:41:02.813839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.230 [2024-09-28 10:41:02.813853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:28.230 [2024-09-28 10:41:02.813862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:20:28.230 [2024-09-28 10:41:02.813874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.230 [2024-09-28 10:41:02.815082] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 129.156 ms, result 0 00:21:40.099  Copying: 13/1024 [MB] (13 MBps) Copying: 28/1024 [MB] (15 MBps) Copying: 41/1024 [MB] (12 MBps) Copying: 58/1024 [MB] (17 
MBps) Copying: 73/1024 [MB] (15 MBps) Copying: 88/1024 [MB] (14 MBps) Copying: 104/1024 [MB] (15 MBps) Copying: 116/1024 [MB] (11 MBps) Copying: 126/1024 [MB] (10 MBps) Copying: 137/1024 [MB] (10 MBps) Copying: 147/1024 [MB] (10 MBps) Copying: 169/1024 [MB] (21 MBps) Copying: 179/1024 [MB] (10 MBps) Copying: 190/1024 [MB] (10 MBps) Copying: 201/1024 [MB] (11 MBps) Copying: 214/1024 [MB] (12 MBps) Copying: 233/1024 [MB] (19 MBps) Copying: 244/1024 [MB] (10 MBps) Copying: 254/1024 [MB] (10 MBps) Copying: 266/1024 [MB] (11 MBps) Copying: 285/1024 [MB] (19 MBps) Copying: 298/1024 [MB] (12 MBps) Copying: 310/1024 [MB] (12 MBps) Copying: 321/1024 [MB] (11 MBps) Copying: 332/1024 [MB] (11 MBps) Copying: 343/1024 [MB] (10 MBps) Copying: 354/1024 [MB] (10 MBps) Copying: 366/1024 [MB] (11 MBps) Copying: 377/1024 [MB] (11 MBps) Copying: 390/1024 [MB] (12 MBps) Copying: 402/1024 [MB] (11 MBps) Copying: 418/1024 [MB] (16 MBps) Copying: 431/1024 [MB] (13 MBps) Copying: 446/1024 [MB] (14 MBps) Copying: 467/1024 [MB] (20 MBps) Copying: 484/1024 [MB] (17 MBps) Copying: 498/1024 [MB] (13 MBps) Copying: 512/1024 [MB] (14 MBps) Copying: 533/1024 [MB] (20 MBps) Copying: 553/1024 [MB] (20 MBps) Copying: 571/1024 [MB] (18 MBps) Copying: 584/1024 [MB] (13 MBps) Copying: 602/1024 [MB] (18 MBps) Copying: 618/1024 [MB] (15 MBps) Copying: 628/1024 [MB] (10 MBps) Copying: 639/1024 [MB] (10 MBps) Copying: 650/1024 [MB] (10 MBps) Copying: 663/1024 [MB] (13 MBps) Copying: 683/1024 [MB] (20 MBps) Copying: 699/1024 [MB] (15 MBps) Copying: 711/1024 [MB] (12 MBps) Copying: 722/1024 [MB] (10 MBps) Copying: 740/1024 [MB] (18 MBps) Copying: 760/1024 [MB] (19 MBps) Copying: 776/1024 [MB] (16 MBps) Copying: 796/1024 [MB] (19 MBps) Copying: 811/1024 [MB] (15 MBps) Copying: 828/1024 [MB] (17 MBps) Copying: 845/1024 [MB] (17 MBps) Copying: 856/1024 [MB] (10 MBps) Copying: 868/1024 [MB] (11 MBps) Copying: 885/1024 [MB] (17 MBps) Copying: 900/1024 [MB] (15 MBps) Copying: 911/1024 [MB] (10 MBps) Copying: 923/1024 [MB] (11 MBps) Copying: 937/1024 [MB] (14 MBps) Copying: 949/1024 [MB] (11 MBps) Copying: 959/1024 [MB] (10 MBps) Copying: 970/1024 [MB] (10 MBps) Copying: 981/1024 [MB] (11 MBps) Copying: 1007/1024 [MB] (26 MBps) Copying: 1024/1024 [MB] (average 14 MBps)[2024-09-28 10:42:14.861843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.099 [2024-09-28 10:42:14.861981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:40.099 [2024-09-28 10:42:14.862010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:40.099 [2024-09-28 10:42:14.862039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.099 [2024-09-28 10:42:14.862079] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:40.099 [2024-09-28 10:42:14.863033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.099 [2024-09-28 10:42:14.863099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:40.099 [2024-09-28 10:42:14.863124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.927 ms 00:21:40.099 [2024-09-28 10:42:14.863139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.099 [2024-09-28 10:42:14.863558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.099 [2024-09-28 10:42:14.863598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:40.099 [2024-09-28 
10:42:14.863614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.381 ms 00:21:40.099 [2024-09-28 10:42:14.863630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.099 [2024-09-28 10:42:14.873113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.099 [2024-09-28 10:42:14.873180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:40.099 [2024-09-28 10:42:14.873193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.454 ms 00:21:40.099 [2024-09-28 10:42:14.873202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.359 [2024-09-28 10:42:14.879566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.359 [2024-09-28 10:42:14.879608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:40.359 [2024-09-28 10:42:14.879621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.302 ms 00:21:40.359 [2024-09-28 10:42:14.879629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.359 [2024-09-28 10:42:14.882584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.359 [2024-09-28 10:42:14.882636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:40.359 [2024-09-28 10:42:14.882647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.901 ms 00:21:40.359 [2024-09-28 10:42:14.882655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.359 [2024-09-28 10:42:14.887469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.360 [2024-09-28 10:42:14.887520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:40.360 [2024-09-28 10:42:14.887543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.767 ms 00:21:40.360 [2024-09-28 10:42:14.887556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.620 [2024-09-28 10:42:15.255699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.620 [2024-09-28 10:42:15.255751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:40.620 [2024-09-28 10:42:15.255766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 368.091 ms 00:21:40.621 [2024-09-28 10:42:15.255778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.621 [2024-09-28 10:42:15.258745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.621 [2024-09-28 10:42:15.258791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:40.621 [2024-09-28 10:42:15.258801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.949 ms 00:21:40.621 [2024-09-28 10:42:15.258809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.621 [2024-09-28 10:42:15.261557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.621 [2024-09-28 10:42:15.261605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:40.621 [2024-09-28 10:42:15.261617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.695 ms 00:21:40.621 [2024-09-28 10:42:15.261625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.621 [2024-09-28 10:42:15.263935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.621 [2024-09-28 10:42:15.263996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Persist superblock 00:21:40.621 [2024-09-28 10:42:15.264020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.267 ms 00:21:40.621 [2024-09-28 10:42:15.264028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.621 [2024-09-28 10:42:15.266314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.621 [2024-09-28 10:42:15.266361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:40.621 [2024-09-28 10:42:15.266372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.215 ms 00:21:40.621 [2024-09-28 10:42:15.266381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.621 [2024-09-28 10:42:15.266420] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:40.621 [2024-09-28 10:42:15.266436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131840 / 261120 wr_cnt: 1 state: open 00:21:40.621 [2024-09-28 10:42:15.266448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 
[2024-09-28 10:42:15.266597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 
state: free 00:21:40.621 [2024-09-28 10:42:15.266794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.266994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.267002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.267011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.267019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.267026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 
0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.267034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.267042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.267051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.267058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.267067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.267076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.267084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.267092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.267100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.267107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.267115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.267123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.267130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.267138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.267146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.267154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.267161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:40.621 [2024-09-28 10:42:15.267169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:40.622 [2024-09-28 10:42:15.267177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:40.622 [2024-09-28 10:42:15.267185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:40.622 [2024-09-28 10:42:15.267192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:40.622 [2024-09-28 10:42:15.267200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:40.622 [2024-09-28 10:42:15.267207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:40.622 [2024-09-28 10:42:15.267215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:40.622 [2024-09-28 10:42:15.267225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:40.622 [2024-09-28 10:42:15.267233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:40.622 [2024-09-28 10:42:15.267241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:40.622 [2024-09-28 10:42:15.267249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:40.622 [2024-09-28 10:42:15.267257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:40.622 [2024-09-28 10:42:15.267265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:40.622 [2024-09-28 10:42:15.267273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:40.622 [2024-09-28 10:42:15.267290] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:40.622 [2024-09-28 10:42:15.267298] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e5266b08-7571-4e39-9a75-9919ffbc7afd 00:21:40.622 [2024-09-28 10:42:15.267322] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131840 00:21:40.622 [2024-09-28 10:42:15.267334] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 25280 00:21:40.622 [2024-09-28 10:42:15.267347] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 24320 00:21:40.622 [2024-09-28 10:42:15.267356] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0395 00:21:40.622 [2024-09-28 10:42:15.267364] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:40.622 [2024-09-28 10:42:15.267372] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:40.622 [2024-09-28 10:42:15.267380] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:40.622 [2024-09-28 10:42:15.267387] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:40.622 [2024-09-28 10:42:15.267394] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:40.622 [2024-09-28 10:42:15.267401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.622 [2024-09-28 10:42:15.267409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:40.622 [2024-09-28 10:42:15.267418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.982 ms 00:21:40.622 [2024-09-28 10:42:15.267430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.622 [2024-09-28 10:42:15.269692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.622 [2024-09-28 10:42:15.269731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:40.622 [2024-09-28 10:42:15.269743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.243 ms 00:21:40.622 [2024-09-28 10:42:15.269751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.622 [2024-09-28 10:42:15.269869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.622 [2024-09-28 10:42:15.269878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:40.622 [2024-09-28 10:42:15.269897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:21:40.622 [2024-09-28 10:42:15.269910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
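As a quick sanity check on the statistics dump above: the WAF value printed by ftl_dev_dump_stats matches the ratio of the "total writes" and "user writes" counters in the same dump, both here and in the earlier shutdown dump near the top of this section. A minimal sketch, assuming that ratio is indeed how the figure is derived (the log only prints the final value):

# Reproducing the WAF figures from the two ftl_dev_dump_stats dumps in this
# log, assuming WAF = total writes / user writes (a reading of the counters,
# not something the log states explicitly).
dumps = {
    "first shutdown (earlier in this section)": (108480, 107520),  # total, user
    "second shutdown (above)":                  (25280,  24320),
}

for label, (total, user) in dumps.items():
    print(f"{label}: WAF = {total / user:.4f}")
# first shutdown (earlier in this section): WAF = 1.0089
# second shutdown (above):                  WAF = 1.0395

Both dumps show the same absolute difference of 960 between total and user writes, which is why the smaller workload here reports a noticeably higher WAF than the earlier run.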
00:21:40.622 [2024-09-28 10:42:15.276803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:40.622 [2024-09-28 10:42:15.276856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:40.622 [2024-09-28 10:42:15.276877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:40.622 [2024-09-28 10:42:15.276886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.622 [2024-09-28 10:42:15.276954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:40.622 [2024-09-28 10:42:15.277019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:40.622 [2024-09-28 10:42:15.277028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:40.622 [2024-09-28 10:42:15.277035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.622 [2024-09-28 10:42:15.277111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:40.622 [2024-09-28 10:42:15.277123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:40.622 [2024-09-28 10:42:15.277132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:40.622 [2024-09-28 10:42:15.277141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.622 [2024-09-28 10:42:15.277156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:40.622 [2024-09-28 10:42:15.277166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:40.622 [2024-09-28 10:42:15.277179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:40.622 [2024-09-28 10:42:15.277188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.622 [2024-09-28 10:42:15.291214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:40.622 [2024-09-28 10:42:15.291267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:40.622 [2024-09-28 10:42:15.291288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:40.622 [2024-09-28 10:42:15.291297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.622 [2024-09-28 10:42:15.302203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:40.622 [2024-09-28 10:42:15.302255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:40.622 [2024-09-28 10:42:15.302270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:40.622 [2024-09-28 10:42:15.302278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.622 [2024-09-28 10:42:15.302346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:40.622 [2024-09-28 10:42:15.302355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:40.622 [2024-09-28 10:42:15.302364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:40.622 [2024-09-28 10:42:15.302372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.622 [2024-09-28 10:42:15.302410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:40.622 [2024-09-28 10:42:15.302419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:40.622 [2024-09-28 10:42:15.302428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:40.622 [2024-09-28 
10:42:15.302440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.622 [2024-09-28 10:42:15.302511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:40.622 [2024-09-28 10:42:15.302524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:40.622 [2024-09-28 10:42:15.302536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:40.622 [2024-09-28 10:42:15.302544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.622 [2024-09-28 10:42:15.302572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:40.622 [2024-09-28 10:42:15.302581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:40.622 [2024-09-28 10:42:15.302592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:40.622 [2024-09-28 10:42:15.302604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.622 [2024-09-28 10:42:15.302642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:40.622 [2024-09-28 10:42:15.302654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:40.622 [2024-09-28 10:42:15.302662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:40.622 [2024-09-28 10:42:15.302670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.622 [2024-09-28 10:42:15.302711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:40.622 [2024-09-28 10:42:15.302728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:40.622 [2024-09-28 10:42:15.302736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:40.622 [2024-09-28 10:42:15.302745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.622 [2024-09-28 10:42:15.302895] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 441.016 ms, result 0 00:21:40.884 00:21:40.884 00:21:40.884 10:42:15 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:43.435 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:21:43.435 10:42:17 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:21:43.435 10:42:17 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:21:43.435 10:42:17 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:43.435 10:42:17 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:43.435 10:42:17 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:43.435 10:42:17 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 87050 00:21:43.435 10:42:17 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 87050 ']' 00:21:43.435 10:42:17 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 87050 00:21:43.435 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (87050) - No such process 00:21:43.435 10:42:17 ftl.ftl_restore -- common/autotest_common.sh@977 -- # echo 'Process with pid 87050 is not found' 00:21:43.435 Process with pid 87050 is not found 00:21:43.435 Remove shared memory files 00:21:43.435 10:42:17 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:21:43.435 10:42:17 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove 
shared memory files 00:21:43.435 10:42:17 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:21:43.435 10:42:17 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:21:43.435 10:42:17 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:21:43.435 10:42:17 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:21:43.435 10:42:17 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:21:43.435 00:21:43.435 real 4m50.707s 00:21:43.435 user 4m37.888s 00:21:43.435 sys 0m12.223s 00:21:43.435 10:42:17 ftl.ftl_restore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:43.435 10:42:17 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:21:43.435 ************************************ 00:21:43.435 END TEST ftl_restore 00:21:43.435 ************************************ 00:21:43.435 10:42:18 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:21:43.435 10:42:18 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:21:43.435 10:42:18 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:43.435 10:42:18 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:43.435 ************************************ 00:21:43.435 START TEST ftl_dirty_shutdown 00:21:43.435 ************************************ 00:21:43.435 10:42:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:21:43.435 * Looking for test storage... 00:21:43.435 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:43.435 10:42:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:43.435 10:42:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:21:43.435 10:42:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:43.435 10:42:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:43.435 10:42:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:43.435 10:42:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:43.435 10:42:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:43.435 10:42:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:21:43.435 10:42:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:21:43.435 10:42:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:21:43.435 10:42:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:21:43.435 10:42:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:21:43.435 10:42:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:21:43.435 10:42:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:21:43.435 10:42:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:43.435 10:42:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:21:43.435 10:42:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:21:43.435 10:42:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:43.435 10:42:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:43.435 10:42:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:21:43.435 10:42:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:21:43.435 10:42:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:43.435 10:42:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:21:43.435 10:42:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:21:43.435 10:42:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:21:43.435 10:42:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:21:43.435 10:42:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:43.435 10:42:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:21:43.435 10:42:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:21:43.435 10:42:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:43.435 10:42:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:43.435 10:42:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:21:43.435 10:42:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:43.435 10:42:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:43.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.435 --rc genhtml_branch_coverage=1 00:21:43.435 --rc genhtml_function_coverage=1 00:21:43.435 --rc genhtml_legend=1 00:21:43.435 --rc geninfo_all_blocks=1 00:21:43.435 --rc geninfo_unexecuted_blocks=1 00:21:43.435 00:21:43.435 ' 00:21:43.435 10:42:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:43.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.435 --rc genhtml_branch_coverage=1 00:21:43.435 --rc genhtml_function_coverage=1 00:21:43.435 --rc genhtml_legend=1 00:21:43.435 --rc geninfo_all_blocks=1 00:21:43.435 --rc geninfo_unexecuted_blocks=1 00:21:43.435 00:21:43.435 ' 00:21:43.435 10:42:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:43.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.435 --rc genhtml_branch_coverage=1 00:21:43.435 --rc genhtml_function_coverage=1 00:21:43.435 --rc genhtml_legend=1 00:21:43.435 --rc geninfo_all_blocks=1 00:21:43.435 --rc geninfo_unexecuted_blocks=1 00:21:43.435 00:21:43.435 ' 00:21:43.435 10:42:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:43.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.435 --rc genhtml_branch_coverage=1 00:21:43.435 --rc genhtml_function_coverage=1 00:21:43.435 --rc genhtml_legend=1 00:21:43.435 --rc geninfo_all_blocks=1 00:21:43.435 --rc geninfo_unexecuted_blocks=1 00:21:43.435 00:21:43.435 ' 00:21:43.435 10:42:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:43.435 10:42:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:21:43.435 10:42:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:43.435 10:42:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:43.435 10:42:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:21:43.435 10:42:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:43.435 10:42:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:43.435 10:42:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:43.435 10:42:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:43.435 10:42:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:43.435 10:42:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:43.435 10:42:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:43.435 10:42:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:43.435 10:42:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:43.435 10:42:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:43.435 10:42:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:43.435 10:42:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:43.435 10:42:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:43.435 10:42:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:43.435 10:42:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:43.435 10:42:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:43.436 10:42:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:43.436 10:42:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:43.436 10:42:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:43.436 10:42:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:43.436 10:42:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:43.436 10:42:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:43.436 10:42:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:43.436 10:42:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:43.436 10:42:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:43.436 10:42:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:43.436 10:42:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:21:43.436 10:42:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:21:43.436 10:42:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:21:43.436 10:42:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:21:43.436 10:42:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:21:43.436 10:42:18 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:21:43.436 10:42:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:21:43.436 10:42:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:21:43.436 10:42:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:21:43.436 10:42:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:21:43.436 10:42:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:21:43.436 10:42:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=90135 00:21:43.436 10:42:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 90135 00:21:43.436 10:42:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@831 -- # '[' -z 90135 ']' 00:21:43.436 10:42:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:43.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:43.436 10:42:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:43.436 10:42:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:43.436 10:42:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:21:43.436 10:42:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:43.436 10:42:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:43.698 [2024-09-28 10:42:18.275491] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:21:43.698 [2024-09-28 10:42:18.275634] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90135 ] 00:21:43.698 [2024-09-28 10:42:18.408234] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
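A condensed sketch of how the harness brings the target up before the RPCs below. The binary path, core mask, socket path and pid are taken from this run; backgrounding with & and capturing $! are assumptions about how svcpid ends up as 90135:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &   # single-core target, as traced above
  svcpid=$!                                                  # assumed; the trace records svcpid=90135
  # waitforlisten then polls /var/tmp/spdk.sock until the target answers RPCs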
00:21:43.699 [2024-09-28 10:42:18.429104] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.961 [2024-09-28 10:42:18.479278] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:44.532 10:42:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:44.532 10:42:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # return 0 00:21:44.532 10:42:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:44.532 10:42:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:21:44.532 10:42:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:44.532 10:42:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:21:44.532 10:42:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:21:44.532 10:42:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:44.791 10:42:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:44.791 10:42:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:21:44.791 10:42:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:44.791 10:42:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:21:44.791 10:42:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:44.791 10:42:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:21:44.791 10:42:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:21:44.791 10:42:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:45.051 10:42:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:45.051 { 00:21:45.051 "name": "nvme0n1", 00:21:45.051 "aliases": [ 00:21:45.051 "a4f5fa62-c225-4936-9628-7fe95f181cee" 00:21:45.051 ], 00:21:45.051 "product_name": "NVMe disk", 00:21:45.051 "block_size": 4096, 00:21:45.051 "num_blocks": 1310720, 00:21:45.051 "uuid": "a4f5fa62-c225-4936-9628-7fe95f181cee", 00:21:45.051 "numa_id": -1, 00:21:45.051 "assigned_rate_limits": { 00:21:45.051 "rw_ios_per_sec": 0, 00:21:45.051 "rw_mbytes_per_sec": 0, 00:21:45.051 "r_mbytes_per_sec": 0, 00:21:45.051 "w_mbytes_per_sec": 0 00:21:45.051 }, 00:21:45.051 "claimed": true, 00:21:45.051 "claim_type": "read_many_write_one", 00:21:45.051 "zoned": false, 00:21:45.051 "supported_io_types": { 00:21:45.051 "read": true, 00:21:45.051 "write": true, 00:21:45.051 "unmap": true, 00:21:45.051 "flush": true, 00:21:45.051 "reset": true, 00:21:45.051 "nvme_admin": true, 00:21:45.051 "nvme_io": true, 00:21:45.051 "nvme_io_md": false, 00:21:45.051 "write_zeroes": true, 00:21:45.051 "zcopy": false, 00:21:45.051 "get_zone_info": false, 00:21:45.051 "zone_management": false, 00:21:45.051 "zone_append": false, 00:21:45.051 "compare": true, 00:21:45.051 "compare_and_write": false, 00:21:45.051 "abort": true, 00:21:45.051 "seek_hole": false, 00:21:45.051 "seek_data": false, 00:21:45.051 "copy": true, 00:21:45.051 "nvme_iov_md": false 00:21:45.051 }, 00:21:45.051 "driver_specific": { 00:21:45.051 "nvme": [ 00:21:45.051 { 00:21:45.051 "pci_address": "0000:00:11.0", 00:21:45.051 "trid": { 00:21:45.051 "trtype": "PCIe", 00:21:45.051 "traddr": "0000:00:11.0" 00:21:45.051 }, 00:21:45.051 "ctrlr_data": { 
00:21:45.051 "cntlid": 0, 00:21:45.051 "vendor_id": "0x1b36", 00:21:45.051 "model_number": "QEMU NVMe Ctrl", 00:21:45.051 "serial_number": "12341", 00:21:45.051 "firmware_revision": "8.0.0", 00:21:45.051 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:45.051 "oacs": { 00:21:45.051 "security": 0, 00:21:45.051 "format": 1, 00:21:45.051 "firmware": 0, 00:21:45.051 "ns_manage": 1 00:21:45.051 }, 00:21:45.051 "multi_ctrlr": false, 00:21:45.051 "ana_reporting": false 00:21:45.051 }, 00:21:45.051 "vs": { 00:21:45.051 "nvme_version": "1.4" 00:21:45.051 }, 00:21:45.051 "ns_data": { 00:21:45.051 "id": 1, 00:21:45.051 "can_share": false 00:21:45.051 } 00:21:45.051 } 00:21:45.051 ], 00:21:45.051 "mp_policy": "active_passive" 00:21:45.051 } 00:21:45.051 } 00:21:45.051 ]' 00:21:45.052 10:42:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:45.052 10:42:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:21:45.052 10:42:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:45.052 10:42:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:21:45.052 10:42:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:21:45.052 10:42:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:21:45.052 10:42:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:21:45.052 10:42:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:45.052 10:42:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:21:45.052 10:42:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:45.052 10:42:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:45.313 10:42:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=1a4d33cd-53c4-458f-b073-158522f05e8c 00:21:45.313 10:42:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:21:45.313 10:42:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1a4d33cd-53c4-458f-b073-158522f05e8c 00:21:45.572 10:42:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:21:45.572 10:42:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=70a81290-b884-459c-a83b-aeb626675223 00:21:45.572 10:42:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 70a81290-b884-459c-a83b-aeb626675223 00:21:45.833 10:42:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=b7a13f3f-a50d-44d5-90fc-635c1edeace1 00:21:45.833 10:42:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:21:45.833 10:42:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 b7a13f3f-a50d-44d5-90fc-635c1edeace1 00:21:45.833 10:42:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:21:45.833 10:42:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:45.833 10:42:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=b7a13f3f-a50d-44d5-90fc-635c1edeace1 00:21:45.833 10:42:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:21:45.833 10:42:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 
b7a13f3f-a50d-44d5-90fc-635c1edeace1 00:21:45.833 10:42:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=b7a13f3f-a50d-44d5-90fc-635c1edeace1 00:21:45.833 10:42:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:45.833 10:42:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:21:45.833 10:42:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:21:45.833 10:42:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b7a13f3f-a50d-44d5-90fc-635c1edeace1 00:21:46.095 10:42:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:46.095 { 00:21:46.096 "name": "b7a13f3f-a50d-44d5-90fc-635c1edeace1", 00:21:46.096 "aliases": [ 00:21:46.096 "lvs/nvme0n1p0" 00:21:46.096 ], 00:21:46.096 "product_name": "Logical Volume", 00:21:46.096 "block_size": 4096, 00:21:46.096 "num_blocks": 26476544, 00:21:46.096 "uuid": "b7a13f3f-a50d-44d5-90fc-635c1edeace1", 00:21:46.096 "assigned_rate_limits": { 00:21:46.096 "rw_ios_per_sec": 0, 00:21:46.096 "rw_mbytes_per_sec": 0, 00:21:46.096 "r_mbytes_per_sec": 0, 00:21:46.096 "w_mbytes_per_sec": 0 00:21:46.096 }, 00:21:46.096 "claimed": false, 00:21:46.096 "zoned": false, 00:21:46.096 "supported_io_types": { 00:21:46.096 "read": true, 00:21:46.096 "write": true, 00:21:46.096 "unmap": true, 00:21:46.096 "flush": false, 00:21:46.096 "reset": true, 00:21:46.096 "nvme_admin": false, 00:21:46.096 "nvme_io": false, 00:21:46.096 "nvme_io_md": false, 00:21:46.096 "write_zeroes": true, 00:21:46.096 "zcopy": false, 00:21:46.096 "get_zone_info": false, 00:21:46.096 "zone_management": false, 00:21:46.096 "zone_append": false, 00:21:46.096 "compare": false, 00:21:46.096 "compare_and_write": false, 00:21:46.096 "abort": false, 00:21:46.096 "seek_hole": true, 00:21:46.096 "seek_data": true, 00:21:46.096 "copy": false, 00:21:46.096 "nvme_iov_md": false 00:21:46.096 }, 00:21:46.096 "driver_specific": { 00:21:46.096 "lvol": { 00:21:46.096 "lvol_store_uuid": "70a81290-b884-459c-a83b-aeb626675223", 00:21:46.096 "base_bdev": "nvme0n1", 00:21:46.096 "thin_provision": true, 00:21:46.096 "num_allocated_clusters": 0, 00:21:46.096 "snapshot": false, 00:21:46.096 "clone": false, 00:21:46.096 "esnap_clone": false 00:21:46.096 } 00:21:46.096 } 00:21:46.096 } 00:21:46.096 ]' 00:21:46.096 10:42:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:46.096 10:42:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:21:46.096 10:42:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:46.096 10:42:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:46.096 10:42:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:46.096 10:42:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:21:46.096 10:42:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:21:46.096 10:42:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:21:46.096 10:42:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:46.357 10:42:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:46.357 10:42:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:46.357 10:42:21 
ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size b7a13f3f-a50d-44d5-90fc-635c1edeace1 00:21:46.357 10:42:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=b7a13f3f-a50d-44d5-90fc-635c1edeace1 00:21:46.357 10:42:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:46.357 10:42:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:21:46.357 10:42:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:21:46.357 10:42:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b7a13f3f-a50d-44d5-90fc-635c1edeace1 00:21:46.618 10:42:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:46.619 { 00:21:46.619 "name": "b7a13f3f-a50d-44d5-90fc-635c1edeace1", 00:21:46.619 "aliases": [ 00:21:46.619 "lvs/nvme0n1p0" 00:21:46.619 ], 00:21:46.619 "product_name": "Logical Volume", 00:21:46.619 "block_size": 4096, 00:21:46.619 "num_blocks": 26476544, 00:21:46.619 "uuid": "b7a13f3f-a50d-44d5-90fc-635c1edeace1", 00:21:46.619 "assigned_rate_limits": { 00:21:46.619 "rw_ios_per_sec": 0, 00:21:46.619 "rw_mbytes_per_sec": 0, 00:21:46.619 "r_mbytes_per_sec": 0, 00:21:46.619 "w_mbytes_per_sec": 0 00:21:46.619 }, 00:21:46.619 "claimed": false, 00:21:46.619 "zoned": false, 00:21:46.619 "supported_io_types": { 00:21:46.619 "read": true, 00:21:46.619 "write": true, 00:21:46.619 "unmap": true, 00:21:46.619 "flush": false, 00:21:46.619 "reset": true, 00:21:46.619 "nvme_admin": false, 00:21:46.619 "nvme_io": false, 00:21:46.619 "nvme_io_md": false, 00:21:46.619 "write_zeroes": true, 00:21:46.619 "zcopy": false, 00:21:46.619 "get_zone_info": false, 00:21:46.619 "zone_management": false, 00:21:46.619 "zone_append": false, 00:21:46.619 "compare": false, 00:21:46.619 "compare_and_write": false, 00:21:46.619 "abort": false, 00:21:46.619 "seek_hole": true, 00:21:46.619 "seek_data": true, 00:21:46.619 "copy": false, 00:21:46.619 "nvme_iov_md": false 00:21:46.619 }, 00:21:46.619 "driver_specific": { 00:21:46.619 "lvol": { 00:21:46.619 "lvol_store_uuid": "70a81290-b884-459c-a83b-aeb626675223", 00:21:46.619 "base_bdev": "nvme0n1", 00:21:46.619 "thin_provision": true, 00:21:46.619 "num_allocated_clusters": 0, 00:21:46.619 "snapshot": false, 00:21:46.619 "clone": false, 00:21:46.619 "esnap_clone": false 00:21:46.619 } 00:21:46.619 } 00:21:46.619 } 00:21:46.619 ]' 00:21:46.619 10:42:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:46.619 10:42:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:21:46.619 10:42:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:46.619 10:42:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:46.619 10:42:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:46.619 10:42:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:21:46.619 10:42:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:21:46.619 10:42:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:46.880 10:42:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:21:46.880 10:42:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size b7a13f3f-a50d-44d5-90fc-635c1edeace1 00:21:46.880 
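The get_bdev_size helper traced above derives a size in MiB from two jq queries against bdev_get_bdevs; with the lvol's block_size of 4096 and num_blocks of 26476544 that works out to the 103424 reported. A minimal standalone equivalent, reusing the names from this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bs=$($rpc bdev_get_bdevs -b b7a13f3f-a50d-44d5-90fc-635c1edeace1 | jq '.[] .block_size')   # 4096
  nb=$($rpc bdev_get_bdevs -b b7a13f3f-a50d-44d5-90fc-635c1edeace1 | jq '.[] .num_blocks')   # 26476544
  echo $(( bs * nb / 1024 / 1024 ))                                                          # 103424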
10:42:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=b7a13f3f-a50d-44d5-90fc-635c1edeace1 00:21:46.880 10:42:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:46.880 10:42:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:21:46.880 10:42:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:21:46.880 10:42:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b7a13f3f-a50d-44d5-90fc-635c1edeace1 00:21:47.139 10:42:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:47.139 { 00:21:47.139 "name": "b7a13f3f-a50d-44d5-90fc-635c1edeace1", 00:21:47.139 "aliases": [ 00:21:47.139 "lvs/nvme0n1p0" 00:21:47.139 ], 00:21:47.139 "product_name": "Logical Volume", 00:21:47.139 "block_size": 4096, 00:21:47.139 "num_blocks": 26476544, 00:21:47.139 "uuid": "b7a13f3f-a50d-44d5-90fc-635c1edeace1", 00:21:47.139 "assigned_rate_limits": { 00:21:47.139 "rw_ios_per_sec": 0, 00:21:47.139 "rw_mbytes_per_sec": 0, 00:21:47.139 "r_mbytes_per_sec": 0, 00:21:47.139 "w_mbytes_per_sec": 0 00:21:47.139 }, 00:21:47.139 "claimed": false, 00:21:47.139 "zoned": false, 00:21:47.139 "supported_io_types": { 00:21:47.139 "read": true, 00:21:47.139 "write": true, 00:21:47.139 "unmap": true, 00:21:47.139 "flush": false, 00:21:47.139 "reset": true, 00:21:47.139 "nvme_admin": false, 00:21:47.139 "nvme_io": false, 00:21:47.139 "nvme_io_md": false, 00:21:47.139 "write_zeroes": true, 00:21:47.139 "zcopy": false, 00:21:47.139 "get_zone_info": false, 00:21:47.139 "zone_management": false, 00:21:47.139 "zone_append": false, 00:21:47.139 "compare": false, 00:21:47.139 "compare_and_write": false, 00:21:47.139 "abort": false, 00:21:47.139 "seek_hole": true, 00:21:47.139 "seek_data": true, 00:21:47.139 "copy": false, 00:21:47.139 "nvme_iov_md": false 00:21:47.139 }, 00:21:47.139 "driver_specific": { 00:21:47.139 "lvol": { 00:21:47.139 "lvol_store_uuid": "70a81290-b884-459c-a83b-aeb626675223", 00:21:47.139 "base_bdev": "nvme0n1", 00:21:47.139 "thin_provision": true, 00:21:47.139 "num_allocated_clusters": 0, 00:21:47.139 "snapshot": false, 00:21:47.139 "clone": false, 00:21:47.139 "esnap_clone": false 00:21:47.139 } 00:21:47.139 } 00:21:47.139 } 00:21:47.139 ]' 00:21:47.139 10:42:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:47.139 10:42:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:21:47.139 10:42:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:47.139 10:42:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:47.139 10:42:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:47.139 10:42:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:21:47.139 10:42:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:21:47.139 10:42:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d b7a13f3f-a50d-44d5-90fc-635c1edeace1 --l2p_dram_limit 10' 00:21:47.139 10:42:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:21:47.139 10:42:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:21:47.139 10:42:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 
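Before the bdev_ftl_create call that follows, the RPC sequence traced above reduces to the steps below; this is a condensed replay using the UUIDs generated in this run, not a verbatim excerpt:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0     # base NVMe device
  $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0      # NV cache NVMe device
  $rpc bdev_lvol_create_lvstore nvme0n1 lvs                             # -> lvstore 70a81290-b884-459c-a83b-aeb626675223
  $rpc bdev_lvol_create nvme0n1p0 103424 -t -u 70a81290-b884-459c-a83b-aeb626675223   # -> b7a13f3f-a50d-44d5-90fc-635c1edeace1
  $rpc bdev_split_create nvc0n1 -s 5171 1                               # -> nvc0n1p0
  $rpc -t 240 bdev_ftl_create -b ftl0 -d b7a13f3f-a50d-44d5-90fc-635c1edeace1 --l2p_dram_limit 10 -c nvc0n1p0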
00:21:47.139 10:42:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b7a13f3f-a50d-44d5-90fc-635c1edeace1 --l2p_dram_limit 10 -c nvc0n1p0 00:21:47.398 [2024-09-28 10:42:21.966648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.398 [2024-09-28 10:42:21.966687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:47.398 [2024-09-28 10:42:21.966700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:47.398 [2024-09-28 10:42:21.966710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.398 [2024-09-28 10:42:21.966756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.398 [2024-09-28 10:42:21.966764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:47.398 [2024-09-28 10:42:21.966773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:21:47.398 [2024-09-28 10:42:21.966781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.398 [2024-09-28 10:42:21.966801] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:47.398 [2024-09-28 10:42:21.967315] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:47.398 [2024-09-28 10:42:21.967351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.398 [2024-09-28 10:42:21.967363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:47.399 [2024-09-28 10:42:21.967372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.557 ms 00:21:47.399 [2024-09-28 10:42:21.967379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.399 [2024-09-28 10:42:21.967449] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 3e3420ac-c58b-4945-9d91-c714566e6f15 00:21:47.399 [2024-09-28 10:42:21.968429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.399 [2024-09-28 10:42:21.968458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:47.399 [2024-09-28 10:42:21.968466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:21:47.399 [2024-09-28 10:42:21.968479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.399 [2024-09-28 10:42:21.973245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.399 [2024-09-28 10:42:21.973273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:47.399 [2024-09-28 10:42:21.973282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.731 ms 00:21:47.399 [2024-09-28 10:42:21.973295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.399 [2024-09-28 10:42:21.973365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.399 [2024-09-28 10:42:21.973374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:47.399 [2024-09-28 10:42:21.973382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:21:47.399 [2024-09-28 10:42:21.973390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.399 [2024-09-28 10:42:21.973435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.399 [2024-09-28 10:42:21.973444] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:47.399 [2024-09-28 10:42:21.973452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:47.399 [2024-09-28 10:42:21.973459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.399 [2024-09-28 10:42:21.973475] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:47.399 [2024-09-28 10:42:21.974779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.399 [2024-09-28 10:42:21.974809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:47.399 [2024-09-28 10:42:21.974818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.306 ms 00:21:47.399 [2024-09-28 10:42:21.974826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.399 [2024-09-28 10:42:21.974859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.399 [2024-09-28 10:42:21.974865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:47.399 [2024-09-28 10:42:21.974875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:47.399 [2024-09-28 10:42:21.974881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.399 [2024-09-28 10:42:21.974895] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:47.399 [2024-09-28 10:42:21.975015] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:47.399 [2024-09-28 10:42:21.975026] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:47.399 [2024-09-28 10:42:21.975034] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:47.399 [2024-09-28 10:42:21.975043] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:47.399 [2024-09-28 10:42:21.975054] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:47.399 [2024-09-28 10:42:21.975066] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:47.399 [2024-09-28 10:42:21.975072] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:47.399 [2024-09-28 10:42:21.975078] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:47.399 [2024-09-28 10:42:21.975085] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:47.399 [2024-09-28 10:42:21.975093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.399 [2024-09-28 10:42:21.975098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:47.399 [2024-09-28 10:42:21.975105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:21:47.399 [2024-09-28 10:42:21.975110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.399 [2024-09-28 10:42:21.975177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.399 [2024-09-28 10:42:21.975183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:47.399 [2024-09-28 10:42:21.975192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:21:47.399 [2024-09-28 10:42:21.975198] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.399 [2024-09-28 10:42:21.975277] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:47.399 [2024-09-28 10:42:21.975284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:47.399 [2024-09-28 10:42:21.975291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:47.399 [2024-09-28 10:42:21.975297] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:47.399 [2024-09-28 10:42:21.975304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:47.399 [2024-09-28 10:42:21.975310] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:47.399 [2024-09-28 10:42:21.975318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:47.399 [2024-09-28 10:42:21.975323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:47.399 [2024-09-28 10:42:21.975329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:47.399 [2024-09-28 10:42:21.975334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:47.399 [2024-09-28 10:42:21.975341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:47.399 [2024-09-28 10:42:21.975346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:47.399 [2024-09-28 10:42:21.975353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:47.399 [2024-09-28 10:42:21.975359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:47.399 [2024-09-28 10:42:21.975366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:47.399 [2024-09-28 10:42:21.975371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:47.399 [2024-09-28 10:42:21.975377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:47.399 [2024-09-28 10:42:21.975382] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:47.399 [2024-09-28 10:42:21.975388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:47.399 [2024-09-28 10:42:21.975394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:47.399 [2024-09-28 10:42:21.975400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:47.399 [2024-09-28 10:42:21.975405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:47.399 [2024-09-28 10:42:21.975411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:47.399 [2024-09-28 10:42:21.975417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:47.399 [2024-09-28 10:42:21.975423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:47.399 [2024-09-28 10:42:21.975429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:47.399 [2024-09-28 10:42:21.975436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:47.399 [2024-09-28 10:42:21.975442] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:47.399 [2024-09-28 10:42:21.975450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:47.399 [2024-09-28 10:42:21.975456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:47.399 [2024-09-28 10:42:21.975463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:47.399 [2024-09-28 
10:42:21.975469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:47.399 [2024-09-28 10:42:21.975477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:47.399 [2024-09-28 10:42:21.975483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:47.399 [2024-09-28 10:42:21.975492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:47.399 [2024-09-28 10:42:21.975497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:47.399 [2024-09-28 10:42:21.975504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:47.399 [2024-09-28 10:42:21.975510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:47.399 [2024-09-28 10:42:21.975517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:47.399 [2024-09-28 10:42:21.975523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:47.399 [2024-09-28 10:42:21.975530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:47.399 [2024-09-28 10:42:21.975535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:47.399 [2024-09-28 10:42:21.975542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:47.399 [2024-09-28 10:42:21.975547] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:47.399 [2024-09-28 10:42:21.975556] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:47.399 [2024-09-28 10:42:21.975566] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:47.399 [2024-09-28 10:42:21.975574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:47.399 [2024-09-28 10:42:21.975580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:47.399 [2024-09-28 10:42:21.975587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:47.399 [2024-09-28 10:42:21.975593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:47.399 [2024-09-28 10:42:21.975600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:47.399 [2024-09-28 10:42:21.975606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:47.399 [2024-09-28 10:42:21.975613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:47.399 [2024-09-28 10:42:21.975621] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:47.399 [2024-09-28 10:42:21.975630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:47.399 [2024-09-28 10:42:21.975641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:47.399 [2024-09-28 10:42:21.975649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:47.400 [2024-09-28 10:42:21.975655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:47.400 [2024-09-28 10:42:21.975663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:47.400 [2024-09-28 10:42:21.975669] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:47.400 [2024-09-28 10:42:21.975678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:47.400 [2024-09-28 10:42:21.975684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:47.400 [2024-09-28 10:42:21.975692] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:47.400 [2024-09-28 10:42:21.975698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:47.400 [2024-09-28 10:42:21.975705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:47.400 [2024-09-28 10:42:21.975712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:47.400 [2024-09-28 10:42:21.975719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:47.400 [2024-09-28 10:42:21.975725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:47.400 [2024-09-28 10:42:21.975733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:47.400 [2024-09-28 10:42:21.975739] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:47.400 [2024-09-28 10:42:21.975747] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:47.400 [2024-09-28 10:42:21.975754] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:47.400 [2024-09-28 10:42:21.975761] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:47.400 [2024-09-28 10:42:21.975768] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:47.400 [2024-09-28 10:42:21.975775] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:47.400 [2024-09-28 10:42:21.975781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.400 [2024-09-28 10:42:21.975791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:47.400 [2024-09-28 10:42:21.975798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.556 ms 00:21:47.400 [2024-09-28 10:42:21.975809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.400 [2024-09-28 10:42:21.975836] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
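A quick cross-check of the layout figures above: the 80.00 MiB l2p region divided by the 4-byte L2P address size matches the 20971520 L2P entries reported, while --l2p_dram_limit 10 caps how much of that table stays resident (the trace below reports an l2p maximum resident size of 9 of 10 MiB):

  echo $(( 80 * 1024 * 1024 / 4 ))   # l2p region bytes / L2P address size
  # prints: 20971520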
00:21:47.400 [2024-09-28 10:42:21.975844] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:50.702 [2024-09-28 10:42:25.384019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.702 [2024-09-28 10:42:25.384118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:50.702 [2024-09-28 10:42:25.384140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3408.169 ms 00:21:50.702 [2024-09-28 10:42:25.384152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.702 [2024-09-28 10:42:25.398891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.702 [2024-09-28 10:42:25.398955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:50.702 [2024-09-28 10:42:25.398983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.618 ms 00:21:50.702 [2024-09-28 10:42:25.398999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.702 [2024-09-28 10:42:25.399125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.702 [2024-09-28 10:42:25.399143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:50.702 [2024-09-28 10:42:25.399153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:21:50.702 [2024-09-28 10:42:25.399164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.702 [2024-09-28 10:42:25.411350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.702 [2024-09-28 10:42:25.411406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:50.702 [2024-09-28 10:42:25.411418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.142 ms 00:21:50.702 [2024-09-28 10:42:25.411438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.702 [2024-09-28 10:42:25.411476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.702 [2024-09-28 10:42:25.411487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:50.702 [2024-09-28 10:42:25.411496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:50.702 [2024-09-28 10:42:25.411507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.702 [2024-09-28 10:42:25.412155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.702 [2024-09-28 10:42:25.412198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:50.702 [2024-09-28 10:42:25.412212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.591 ms 00:21:50.702 [2024-09-28 10:42:25.412226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.702 [2024-09-28 10:42:25.412352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.702 [2024-09-28 10:42:25.412365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:50.703 [2024-09-28 10:42:25.412378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:21:50.703 [2024-09-28 10:42:25.412390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.703 [2024-09-28 10:42:25.436059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.703 [2024-09-28 10:42:25.436120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:50.703 [2024-09-28 
10:42:25.436134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.645 ms 00:21:50.703 [2024-09-28 10:42:25.436145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.703 [2024-09-28 10:42:25.446240] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:50.703 [2024-09-28 10:42:25.450209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.703 [2024-09-28 10:42:25.450252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:50.703 [2024-09-28 10:42:25.450267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.950 ms 00:21:50.703 [2024-09-28 10:42:25.450275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.963 [2024-09-28 10:42:25.535627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.963 [2024-09-28 10:42:25.535696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:50.963 [2024-09-28 10:42:25.535718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.310 ms 00:21:50.963 [2024-09-28 10:42:25.535736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.963 [2024-09-28 10:42:25.535953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.963 [2024-09-28 10:42:25.535981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:50.963 [2024-09-28 10:42:25.535995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:21:50.964 [2024-09-28 10:42:25.536003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.964 [2024-09-28 10:42:25.542244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.964 [2024-09-28 10:42:25.542296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:50.964 [2024-09-28 10:42:25.542318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.194 ms 00:21:50.964 [2024-09-28 10:42:25.542327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.964 [2024-09-28 10:42:25.547664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.964 [2024-09-28 10:42:25.547715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:50.964 [2024-09-28 10:42:25.547730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.273 ms 00:21:50.964 [2024-09-28 10:42:25.547738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.964 [2024-09-28 10:42:25.548124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.964 [2024-09-28 10:42:25.548142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:50.964 [2024-09-28 10:42:25.548158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:21:50.964 [2024-09-28 10:42:25.548166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.964 [2024-09-28 10:42:25.590218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.964 [2024-09-28 10:42:25.590278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:50.964 [2024-09-28 10:42:25.590295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.020 ms 00:21:50.964 [2024-09-28 10:42:25.590308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.964 [2024-09-28 10:42:25.597721] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.964 [2024-09-28 10:42:25.597779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:50.964 [2024-09-28 10:42:25.597794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.340 ms 00:21:50.964 [2024-09-28 10:42:25.597802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.964 [2024-09-28 10:42:25.603923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.964 [2024-09-28 10:42:25.603996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:50.964 [2024-09-28 10:42:25.604010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.062 ms 00:21:50.964 [2024-09-28 10:42:25.604017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.964 [2024-09-28 10:42:25.610347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.964 [2024-09-28 10:42:25.610396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:50.964 [2024-09-28 10:42:25.610413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.271 ms 00:21:50.964 [2024-09-28 10:42:25.610420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.964 [2024-09-28 10:42:25.610479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.964 [2024-09-28 10:42:25.610488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:50.964 [2024-09-28 10:42:25.610506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:50.964 [2024-09-28 10:42:25.610514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.964 [2024-09-28 10:42:25.610612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.964 [2024-09-28 10:42:25.610626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:50.964 [2024-09-28 10:42:25.610637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:21:50.964 [2024-09-28 10:42:25.610646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.964 [2024-09-28 10:42:25.612421] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3645.245 ms, result 0 00:21:50.964 { 00:21:50.964 "name": "ftl0", 00:21:50.964 "uuid": "3e3420ac-c58b-4945-9d91-c714566e6f15" 00:21:50.964 } 00:21:50.964 10:42:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:21:50.964 10:42:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:51.225 10:42:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:21:51.225 10:42:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:21:51.225 10:42:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:21:51.486 /dev/nbd0 00:21:51.486 10:42:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:21:51.486 10:42:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:21:51.486 10:42:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # local i 00:21:51.486 10:42:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:51.486 10:42:26 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:51.486 10:42:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:21:51.486 10:42:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # break 00:21:51.486 10:42:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:51.486 10:42:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:51.486 10:42:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:21:51.486 1+0 records in 00:21:51.487 1+0 records out 00:21:51.487 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000573656 s, 7.1 MB/s 00:21:51.487 10:42:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:21:51.487 10:42:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # size=4096 00:21:51.487 10:42:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:21:51.487 10:42:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:51.487 10:42:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # return 0 00:21:51.487 10:42:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:21:51.487 [2024-09-28 10:42:26.171850] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:21:51.487 [2024-09-28 10:42:26.171970] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90280 ] 00:21:51.748 [2024-09-28 10:42:26.300030] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:51.749 [2024-09-28 10:42:26.322162] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:51.749 [2024-09-28 10:42:26.356640] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:56.477  Copying: 186/1024 [MB] (186 MBps) Copying: 409/1024 [MB] (223 MBps) Copying: 669/1024 [MB] (259 MBps) Copying: 917/1024 [MB] (247 MBps) Copying: 1024/1024 [MB] (average 230 MBps) 00:21:56.477 00:21:56.477 10:42:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:58.378 10:42:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:21:58.378 [2024-09-28 10:42:32.994130] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:21:58.378 [2024-09-28 10:42:32.994230] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90358 ] 00:21:58.378 [2024-09-28 10:42:33.119494] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:21:58.378 [2024-09-28 10:42:33.137229] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:58.637 [2024-09-28 10:42:33.175773] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:05.364  Copying: 17/1024 [MB] (17 MBps) Copying: 38/1024 [MB] (21 MBps) Copying: 59/1024 [MB] (20 MBps) Copying: 72/1024 [MB] (13 MBps) Copying: 83/1024 [MB] (11 MBps) Copying: 96/1024 [MB] (12 MBps) Copying: 106/1024 [MB] (10 MBps) Copying: 119/1024 [MB] (13 MBps) Copying: 131/1024 [MB] (11 MBps) Copying: 154/1024 [MB] (22 MBps) Copying: 170/1024 [MB] (16 MBps) Copying: 186/1024 [MB] (15 MBps) Copying: 203/1024 [MB] (17 MBps) Copying: 217/1024 [MB] (14 MBps) Copying: 233/1024 [MB] (16 MBps) Copying: 253/1024 [MB] (19 MBps) Copying: 264/1024 [MB] (11 MBps) Copying: 280/1024 [MB] (16 MBps) Copying: 296/1024 [MB] (15 MBps) Copying: 312/1024 [MB] (15 MBps) Copying: 329/1024 [MB] (17 MBps) Copying: 344/1024 [MB] (14 MBps) Copying: 357/1024 [MB] (12 MBps) Copying: 372/1024 [MB] (15 MBps) Copying: 389/1024 [MB] (17 MBps) Copying: 407/1024 [MB] (17 MBps) Copying: 423/1024 [MB] (16 MBps) Copying: 436/1024 [MB] (12 MBps) Copying: 452/1024 [MB] (16 MBps) Copying: 469/1024 [MB] (16 MBps) Copying: 482/1024 [MB] (12 MBps) Copying: 497/1024 [MB] (15 MBps) Copying: 509/1024 [MB] (11 MBps) Copying: 521/1024 [MB] (12 MBps) Copying: 535/1024 [MB] (14 MBps) Copying: 550/1024 [MB] (14 MBps) Copying: 565/1024 [MB] (14 MBps) Copying: 582/1024 [MB] (17 MBps) Copying: 600/1024 [MB] (17 MBps) Copying: 614/1024 [MB] (14 MBps) Copying: 631/1024 [MB] (17 MBps) Copying: 642/1024 [MB] (10 MBps) Copying: 654/1024 [MB] (11 MBps) Copying: 677/1024 [MB] (23 MBps) Copying: 693/1024 [MB] (16 MBps) Copying: 706/1024 [MB] (12 MBps) Copying: 724/1024 [MB] (17 MBps) Copying: 741/1024 [MB] (16 MBps) Copying: 759/1024 [MB] (18 MBps) Copying: 779/1024 [MB] (19 MBps) Copying: 796/1024 [MB] (17 MBps) Copying: 814/1024 [MB] (17 MBps) Copying: 834/1024 [MB] (20 MBps) Copying: 854/1024 [MB] (19 MBps) Copying: 872/1024 [MB] (18 MBps) Copying: 888/1024 [MB] (16 MBps) Copying: 901/1024 [MB] (13 MBps) Copying: 913/1024 [MB] (11 MBps) Copying: 924/1024 [MB] (10 MBps) Copying: 934/1024 [MB] (10 MBps) Copying: 966568/1048576 [kB] (10136 kBps) Copying: 976592/1048576 [kB] (10024 kBps) Copying: 986720/1048576 [kB] (10128 kBps) Copying: 978/1024 [MB] (14 MBps) Copying: 998/1024 [MB] (19 MBps) Copying: 1013/1024 [MB] (15 MBps) Copying: 1024/1024 [MB] (average 15 MBps) 00:23:05.364 00:23:05.364 10:43:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:23:05.364 10:43:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:23:05.626 10:43:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:05.626 [2024-09-28 10:43:40.273919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.626 [2024-09-28 10:43:40.273977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:05.626 [2024-09-28 10:43:40.273993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:05.626 [2024-09-28 10:43:40.274003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.626 [2024-09-28 10:43:40.274026] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:05.626 [2024-09-28 10:43:40.274458] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.626 [2024-09-28 10:43:40.274473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:05.626 [2024-09-28 10:43:40.274489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms 00:23:05.626 [2024-09-28 10:43:40.274496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.626 [2024-09-28 10:43:40.277040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.626 [2024-09-28 10:43:40.277069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:05.626 [2024-09-28 10:43:40.277081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.519 ms 00:23:05.626 [2024-09-28 10:43:40.277088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.626 [2024-09-28 10:43:40.294184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.626 [2024-09-28 10:43:40.294215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:05.626 [2024-09-28 10:43:40.294227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.076 ms 00:23:05.626 [2024-09-28 10:43:40.294237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.626 [2024-09-28 10:43:40.300424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.626 [2024-09-28 10:43:40.300448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:05.626 [2024-09-28 10:43:40.300460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.152 ms 00:23:05.626 [2024-09-28 10:43:40.300471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.626 [2024-09-28 10:43:40.302278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.626 [2024-09-28 10:43:40.302308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:05.626 [2024-09-28 10:43:40.302319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.701 ms 00:23:05.626 [2024-09-28 10:43:40.302326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.626 [2024-09-28 10:43:40.306577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.626 [2024-09-28 10:43:40.306707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:05.626 [2024-09-28 10:43:40.306728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.214 ms 00:23:05.626 [2024-09-28 10:43:40.306745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.626 [2024-09-28 10:43:40.306864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.626 [2024-09-28 10:43:40.306873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:05.626 [2024-09-28 10:43:40.306882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:23:05.626 [2024-09-28 10:43:40.306890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.626 [2024-09-28 10:43:40.308771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.626 [2024-09-28 10:43:40.308801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:05.626 [2024-09-28 10:43:40.308812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.860 ms 00:23:05.626 [2024-09-28 10:43:40.308818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
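For reference, the dirty_shutdown.sh trace above (the "-- #" lines) boils down to the following sequence, with the /home/vagrant/spdk_repo prefixes shortened for readability: expose ftl0 over NBD, fill a 1 GiB test file from /dev/urandom, write it through /dev/nbd0, then detach the NBD device and unload the bdev.

  modprobe nbd
  scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0
  build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=test/ftl/testfile --bs=4096 --count=262144
  md5sum test/ftl/testfile
  build/bin/spdk_dd -m 0x2 --if=test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct
  sync /dev/nbd0
  scripts/rpc.py nbd_stop_disk /dev/nbd0
  scripts/rpc.py bdev_ftl_unload -b ftl0

The bdev_ftl_unload call is what produces the 'FTL shutdown' trace_step entries and the ftl_dev_dump_bands output that continue below.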
00:23:05.626 [2024-09-28 10:43:40.310242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.626 [2024-09-28 10:43:40.310270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:05.626 [2024-09-28 10:43:40.310280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.385 ms 00:23:05.626 [2024-09-28 10:43:40.310287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.626 [2024-09-28 10:43:40.311511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.626 [2024-09-28 10:43:40.311540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:05.626 [2024-09-28 10:43:40.311550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.191 ms 00:23:05.626 [2024-09-28 10:43:40.311557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.626 [2024-09-28 10:43:40.312715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.626 [2024-09-28 10:43:40.312745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:05.626 [2024-09-28 10:43:40.312755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.100 ms 00:23:05.626 [2024-09-28 10:43:40.312762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.626 [2024-09-28 10:43:40.312793] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:05.626 [2024-09-28 10:43:40.312806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:05.626 [2024-09-28 10:43:40.312817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:05.626 [2024-09-28 10:43:40.312825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:05.626 [2024-09-28 10:43:40.312836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:05.626 [2024-09-28 10:43:40.312843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:05.626 [2024-09-28 10:43:40.312852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:05.626 [2024-09-28 10:43:40.312859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:05.626 [2024-09-28 10:43:40.312868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:05.626 [2024-09-28 10:43:40.312875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:05.626 [2024-09-28 10:43:40.312885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:05.626 [2024-09-28 10:43:40.312892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:05.626 [2024-09-28 10:43:40.312903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:05.626 [2024-09-28 10:43:40.312910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:05.626 [2024-09-28 10:43:40.312919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.312926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.312935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.312943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.312951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.312973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.312985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.312992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313155] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 
10:43:40.313365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 
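The ftl_dev_dump_bands block above (continuing below) lists every band with its valid-LBA count, write count and state; in this run every band is still "free" with wr_cnt 0. A throwaway sanity check against a saved copy of this console (build.log is again just an assumed file name, not something the test writes) would be:

  grep ftl_dev_dump_bands build.log | grep -o 'state: [a-z]*' | sort | uniq -c

which for this output should report only 'state: free'.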
00:23:05.627 [2024-09-28 10:43:40.313575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:05.627 [2024-09-28 10:43:40.313675] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:05.627 [2024-09-28 10:43:40.313686] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3e3420ac-c58b-4945-9d91-c714566e6f15 00:23:05.627 [2024-09-28 10:43:40.313694] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:05.627 [2024-09-28 10:43:40.313702] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:05.627 [2024-09-28 10:43:40.313709] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:05.627 [2024-09-28 10:43:40.313718] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:05.628 [2024-09-28 10:43:40.313725] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:05.628 [2024-09-28 10:43:40.313734] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:05.628 [2024-09-28 10:43:40.313741] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:05.628 [2024-09-28 10:43:40.313748] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:05.628 [2024-09-28 10:43:40.313755] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:05.628 [2024-09-28 10:43:40.313763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.628 [2024-09-28 10:43:40.313771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:05.628 [2024-09-28 10:43:40.313781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.972 ms 00:23:05.628 [2024-09-28 10:43:40.313789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.628 [2024-09-28 10:43:40.315420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.628 [2024-09-28 10:43:40.315509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:05.628 [2024-09-28 10:43:40.315557] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.594 ms 00:23:05.628 [2024-09-28 10:43:40.315579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.628 [2024-09-28 10:43:40.315694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.628 [2024-09-28 10:43:40.315722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:05.628 [2024-09-28 10:43:40.315743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:23:05.628 [2024-09-28 10:43:40.315803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.628 [2024-09-28 10:43:40.320911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:05.628 [2024-09-28 10:43:40.321054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:05.628 [2024-09-28 10:43:40.321107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:05.628 [2024-09-28 10:43:40.321130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.628 [2024-09-28 10:43:40.321215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:05.628 [2024-09-28 10:43:40.321266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:05.628 [2024-09-28 10:43:40.321330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:05.628 [2024-09-28 10:43:40.321378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.628 [2024-09-28 10:43:40.321473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:05.628 [2024-09-28 10:43:40.321530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:05.628 [2024-09-28 10:43:40.321602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:05.628 [2024-09-28 10:43:40.321655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.628 [2024-09-28 10:43:40.321691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:05.628 [2024-09-28 10:43:40.321738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:05.628 [2024-09-28 10:43:40.321763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:05.628 [2024-09-28 10:43:40.321800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.628 [2024-09-28 10:43:40.330403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:05.628 [2024-09-28 10:43:40.330527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:05.628 [2024-09-28 10:43:40.330578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:05.628 [2024-09-28 10:43:40.330616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.628 [2024-09-28 10:43:40.338324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:05.628 [2024-09-28 10:43:40.338442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:05.628 [2024-09-28 10:43:40.338491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:05.628 [2024-09-28 10:43:40.338529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.628 [2024-09-28 10:43:40.338597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:05.628 [2024-09-28 10:43:40.338650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize core IO channel 00:23:05.628 [2024-09-28 10:43:40.338665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:05.628 [2024-09-28 10:43:40.338672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.628 [2024-09-28 10:43:40.338744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:05.628 [2024-09-28 10:43:40.338754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:05.628 [2024-09-28 10:43:40.338764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:05.628 [2024-09-28 10:43:40.338772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.628 [2024-09-28 10:43:40.338843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:05.628 [2024-09-28 10:43:40.338852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:05.628 [2024-09-28 10:43:40.338862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:05.628 [2024-09-28 10:43:40.338870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.628 [2024-09-28 10:43:40.338901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:05.628 [2024-09-28 10:43:40.338910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:05.628 [2024-09-28 10:43:40.338919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:05.628 [2024-09-28 10:43:40.338927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.628 [2024-09-28 10:43:40.338982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:05.628 [2024-09-28 10:43:40.338994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:05.628 [2024-09-28 10:43:40.339004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:05.628 [2024-09-28 10:43:40.339012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.628 [2024-09-28 10:43:40.339057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:05.628 [2024-09-28 10:43:40.339067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:05.628 [2024-09-28 10:43:40.339076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:05.628 [2024-09-28 10:43:40.339084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.628 [2024-09-28 10:43:40.339210] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 65.256 ms, result 0 00:23:05.628 true 00:23:05.628 10:43:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 90135 00:23:05.628 10:43:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid90135 00:23:05.628 10:43:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:23:05.889 [2024-09-28 10:43:40.428990] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
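With the bdev unloaded, the spdk_tgt process itself is killed (kill -9, dirty_shutdown.sh@83) and from here on the test drives ftl0 from standalone spdk_dd processes, using the bdev configuration captured earlier with save_subsystem_config (@64-66). Condensed, with paths shortened, the PID replaced by a shell variable for readability, and the exact redirection into ftl.json being a rough reconstruction rather than a copy of the script, that flow is approximately:

  # capture the bdev subsystem config while the target is still up (@64-66)
  { echo '{"subsystems": ['
    scripts/rpc.py save_subsystem_config -n bdev
    echo ']}'
  } > test/ftl/config/ftl.json

  kill -9 "$spdk_tgt_pid"                       # 90135 in this run (@83)
  rm -f "/dev/shm/spdk_tgt_trace.pid$spdk_tgt_pid"

  # generate a second test file and replay it into ftl0 via the saved config (@87-88)
  build/bin/spdk_dd --if=/dev/urandom --of=test/ftl/testfile2 --bs=4096 --count=262144
  build/bin/spdk_dd --if=test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 \
      --json=test/ftl/config/ftl.json

The --json file lets spdk_dd instantiate the ftl0 stack on its own, which is why a second FTL startup sequence (Check configuration, Open base bdev, Load super block, ...) appears below inside what is otherwise a plain dd-style copy job.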
00:23:05.889 [2024-09-28 10:43:40.429109] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91058 ] 00:23:05.889 [2024-09-28 10:43:40.558119] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:05.889 [2024-09-28 10:43:40.577620] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:05.889 [2024-09-28 10:43:40.613938] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:10.335  Copying: 203/1024 [MB] (203 MBps) Copying: 464/1024 [MB] (261 MBps) Copying: 724/1024 [MB] (260 MBps) Copying: 983/1024 [MB] (258 MBps) Copying: 1024/1024 [MB] (average 246 MBps) 00:23:10.335 00:23:10.335 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 90135 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:23:10.335 10:43:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:10.335 [2024-09-28 10:43:45.047648] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:23:10.335 [2024-09-28 10:43:45.047767] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91105 ] 00:23:10.594 [2024-09-28 10:43:45.176540] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:23:10.594 [2024-09-28 10:43:45.192726] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.594 [2024-09-28 10:43:45.225444] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:10.594 [2024-09-28 10:43:45.307607] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:10.594 [2024-09-28 10:43:45.307658] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:10.594 [2024-09-28 10:43:45.369547] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:23:10.853 [2024-09-28 10:43:45.369886] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:23:10.853 [2024-09-28 10:43:45.370138] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:23:10.853 [2024-09-28 10:43:45.534134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.853 [2024-09-28 10:43:45.534166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:10.853 [2024-09-28 10:43:45.534175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:10.853 [2024-09-28 10:43:45.534182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.853 [2024-09-28 10:43:45.534215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.853 [2024-09-28 10:43:45.534224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:10.853 [2024-09-28 10:43:45.534230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:23:10.853 [2024-09-28 10:43:45.534236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.853 [2024-09-28 10:43:45.534253] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:10.853 [2024-09-28 10:43:45.534428] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:10.853 [2024-09-28 10:43:45.534442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.853 [2024-09-28 10:43:45.534448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:10.853 [2024-09-28 10:43:45.534454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:23:10.853 [2024-09-28 10:43:45.534464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.853 [2024-09-28 10:43:45.535539] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:10.853 [2024-09-28 10:43:45.537474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.853 [2024-09-28 10:43:45.537502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:10.853 [2024-09-28 10:43:45.537510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.936 ms 00:23:10.853 [2024-09-28 10:43:45.537516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.853 [2024-09-28 10:43:45.537557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.853 [2024-09-28 10:43:45.537564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:10.853 [2024-09-28 10:43:45.537571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:23:10.853 [2024-09-28 10:43:45.537577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.854 [2024-09-28 10:43:45.541795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:23:10.854 [2024-09-28 10:43:45.541818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:10.854 [2024-09-28 10:43:45.541825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.188 ms 00:23:10.854 [2024-09-28 10:43:45.541831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.854 [2024-09-28 10:43:45.541888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.854 [2024-09-28 10:43:45.541894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:10.854 [2024-09-28 10:43:45.541901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:23:10.854 [2024-09-28 10:43:45.541906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.854 [2024-09-28 10:43:45.541948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.854 [2024-09-28 10:43:45.541955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:10.854 [2024-09-28 10:43:45.541979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:10.854 [2024-09-28 10:43:45.541985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.854 [2024-09-28 10:43:45.542001] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:10.854 [2024-09-28 10:43:45.543135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.854 [2024-09-28 10:43:45.543151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:10.854 [2024-09-28 10:43:45.543159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.138 ms 00:23:10.854 [2024-09-28 10:43:45.543167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.854 [2024-09-28 10:43:45.543192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.854 [2024-09-28 10:43:45.543200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:10.854 [2024-09-28 10:43:45.543206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:10.854 [2024-09-28 10:43:45.543211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.854 [2024-09-28 10:43:45.543225] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:10.854 [2024-09-28 10:43:45.543239] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:10.854 [2024-09-28 10:43:45.543266] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:10.854 [2024-09-28 10:43:45.543281] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:10.854 [2024-09-28 10:43:45.543360] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:10.854 [2024-09-28 10:43:45.543369] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:10.854 [2024-09-28 10:43:45.543377] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:10.854 [2024-09-28 10:43:45.543384] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:10.854 [2024-09-28 10:43:45.543391] 
ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:10.854 [2024-09-28 10:43:45.543397] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:10.854 [2024-09-28 10:43:45.543403] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:10.854 [2024-09-28 10:43:45.543409] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:10.854 [2024-09-28 10:43:45.543419] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:10.854 [2024-09-28 10:43:45.543426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.854 [2024-09-28 10:43:45.543432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:10.854 [2024-09-28 10:43:45.543438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.203 ms 00:23:10.854 [2024-09-28 10:43:45.543443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.854 [2024-09-28 10:43:45.543505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.854 [2024-09-28 10:43:45.543513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:10.854 [2024-09-28 10:43:45.543520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:23:10.854 [2024-09-28 10:43:45.543528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.854 [2024-09-28 10:43:45.543603] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:10.854 [2024-09-28 10:43:45.543613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:10.854 [2024-09-28 10:43:45.543619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:10.854 [2024-09-28 10:43:45.543625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:10.854 [2024-09-28 10:43:45.543636] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:10.854 [2024-09-28 10:43:45.543641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:10.854 [2024-09-28 10:43:45.543646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:10.854 [2024-09-28 10:43:45.543652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:10.854 [2024-09-28 10:43:45.543658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:10.854 [2024-09-28 10:43:45.543663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:10.854 [2024-09-28 10:43:45.543668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:10.854 [2024-09-28 10:43:45.543674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:10.854 [2024-09-28 10:43:45.543679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:10.854 [2024-09-28 10:43:45.543685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:10.854 [2024-09-28 10:43:45.543693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:10.854 [2024-09-28 10:43:45.543698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:10.854 [2024-09-28 10:43:45.543703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:10.854 [2024-09-28 10:43:45.543708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:10.854 [2024-09-28 10:43:45.543714] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:10.854 [2024-09-28 10:43:45.543719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:10.854 [2024-09-28 10:43:45.543724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:10.854 [2024-09-28 10:43:45.543729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:10.854 [2024-09-28 10:43:45.543733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:10.854 [2024-09-28 10:43:45.543739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:10.854 [2024-09-28 10:43:45.543744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:10.854 [2024-09-28 10:43:45.543748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:10.854 [2024-09-28 10:43:45.543753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:10.854 [2024-09-28 10:43:45.543758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:10.854 [2024-09-28 10:43:45.543763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:10.854 [2024-09-28 10:43:45.543769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:10.854 [2024-09-28 10:43:45.543779] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:10.854 [2024-09-28 10:43:45.543785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:10.854 [2024-09-28 10:43:45.543791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:10.854 [2024-09-28 10:43:45.543796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:10.854 [2024-09-28 10:43:45.543802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:10.854 [2024-09-28 10:43:45.543808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:10.854 [2024-09-28 10:43:45.543813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:10.854 [2024-09-28 10:43:45.543819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:10.854 [2024-09-28 10:43:45.543825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:10.854 [2024-09-28 10:43:45.543830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:10.854 [2024-09-28 10:43:45.543836] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:10.854 [2024-09-28 10:43:45.543841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:10.854 [2024-09-28 10:43:45.543848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:10.854 [2024-09-28 10:43:45.543855] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:10.854 [2024-09-28 10:43:45.543864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:10.854 [2024-09-28 10:43:45.543870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:10.854 [2024-09-28 10:43:45.543878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:10.854 [2024-09-28 10:43:45.543884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:10.854 [2024-09-28 10:43:45.543890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:10.854 [2024-09-28 10:43:45.543896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:10.854 
[2024-09-28 10:43:45.543902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:10.854 [2024-09-28 10:43:45.543907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:10.854 [2024-09-28 10:43:45.543914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:10.854 [2024-09-28 10:43:45.543921] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:10.854 [2024-09-28 10:43:45.543929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:10.854 [2024-09-28 10:43:45.543935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:10.854 [2024-09-28 10:43:45.543942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:10.854 [2024-09-28 10:43:45.543948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:10.854 [2024-09-28 10:43:45.543954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:10.854 [2024-09-28 10:43:45.543981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:10.854 [2024-09-28 10:43:45.543988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:10.854 [2024-09-28 10:43:45.543994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:10.855 [2024-09-28 10:43:45.544002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:10.855 [2024-09-28 10:43:45.544008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:10.855 [2024-09-28 10:43:45.544014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:10.855 [2024-09-28 10:43:45.544021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:10.855 [2024-09-28 10:43:45.544027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:10.855 [2024-09-28 10:43:45.544034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:10.855 [2024-09-28 10:43:45.544040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:10.855 [2024-09-28 10:43:45.544046] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:10.855 [2024-09-28 10:43:45.544056] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:10.855 [2024-09-28 10:43:45.544067] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:10.855 [2024-09-28 10:43:45.544074] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:10.855 [2024-09-28 10:43:45.544081] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:10.855 [2024-09-28 10:43:45.544087] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:10.855 [2024-09-28 10:43:45.544094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.855 [2024-09-28 10:43:45.544100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:10.855 [2024-09-28 10:43:45.544107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.543 ms 00:23:10.855 [2024-09-28 10:43:45.544115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.855 [2024-09-28 10:43:45.562857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.855 [2024-09-28 10:43:45.562895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:10.855 [2024-09-28 10:43:45.562907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.708 ms 00:23:10.855 [2024-09-28 10:43:45.562915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.855 [2024-09-28 10:43:45.563030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.855 [2024-09-28 10:43:45.563044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:10.855 [2024-09-28 10:43:45.563052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:23:10.855 [2024-09-28 10:43:45.563060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.855 [2024-09-28 10:43:45.571047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.855 [2024-09-28 10:43:45.571078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:10.855 [2024-09-28 10:43:45.571089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.916 ms 00:23:10.855 [2024-09-28 10:43:45.571097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.855 [2024-09-28 10:43:45.571131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.855 [2024-09-28 10:43:45.571140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:10.855 [2024-09-28 10:43:45.571150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:10.855 [2024-09-28 10:43:45.571158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.855 [2024-09-28 10:43:45.571475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.855 [2024-09-28 10:43:45.571491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:10.855 [2024-09-28 10:43:45.571504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:23:10.855 [2024-09-28 10:43:45.571518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.855 [2024-09-28 10:43:45.571647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.855 [2024-09-28 10:43:45.571660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:10.855 [2024-09-28 10:43:45.571669] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:23:10.855 [2024-09-28 10:43:45.571679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.855 [2024-09-28 10:43:45.576371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.855 [2024-09-28 10:43:45.576401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:10.855 [2024-09-28 10:43:45.576412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.668 ms 00:23:10.855 [2024-09-28 10:43:45.576420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.855 [2024-09-28 10:43:45.578955] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:10.855 [2024-09-28 10:43:45.579005] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:10.855 [2024-09-28 10:43:45.579017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.855 [2024-09-28 10:43:45.579026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:10.855 [2024-09-28 10:43:45.579038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.500 ms 00:23:10.855 [2024-09-28 10:43:45.579046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.855 [2024-09-28 10:43:45.590376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.855 [2024-09-28 10:43:45.590403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:10.855 [2024-09-28 10:43:45.590412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.288 ms 00:23:10.855 [2024-09-28 10:43:45.590418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.855 [2024-09-28 10:43:45.592094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.855 [2024-09-28 10:43:45.592119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:10.855 [2024-09-28 10:43:45.592126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.643 ms 00:23:10.855 [2024-09-28 10:43:45.592132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.855 [2024-09-28 10:43:45.593566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.855 [2024-09-28 10:43:45.593683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:10.855 [2024-09-28 10:43:45.593694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.408 ms 00:23:10.855 [2024-09-28 10:43:45.593700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.855 [2024-09-28 10:43:45.593939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.855 [2024-09-28 10:43:45.593951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:10.855 [2024-09-28 10:43:45.593975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:23:10.855 [2024-09-28 10:43:45.593985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.855 [2024-09-28 10:43:45.607921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.855 [2024-09-28 10:43:45.607953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:10.855 [2024-09-28 10:43:45.607976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
13.923 ms 00:23:10.855 [2024-09-28 10:43:45.607983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.855 [2024-09-28 10:43:45.613741] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:10.855 [2024-09-28 10:43:45.615866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.855 [2024-09-28 10:43:45.615993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:10.855 [2024-09-28 10:43:45.616005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.838 ms 00:23:10.855 [2024-09-28 10:43:45.616012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.855 [2024-09-28 10:43:45.616057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.855 [2024-09-28 10:43:45.616065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:10.855 [2024-09-28 10:43:45.616071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:10.855 [2024-09-28 10:43:45.616081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.855 [2024-09-28 10:43:45.616134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.855 [2024-09-28 10:43:45.616141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:10.855 [2024-09-28 10:43:45.616147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:23:10.855 [2024-09-28 10:43:45.616154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.855 [2024-09-28 10:43:45.616169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.855 [2024-09-28 10:43:45.616175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:10.855 [2024-09-28 10:43:45.616182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:10.855 [2024-09-28 10:43:45.616190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.855 [2024-09-28 10:43:45.616216] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:10.855 [2024-09-28 10:43:45.616226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.855 [2024-09-28 10:43:45.616232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:10.855 [2024-09-28 10:43:45.616238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:10.855 [2024-09-28 10:43:45.616244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.855 [2024-09-28 10:43:45.619859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.855 [2024-09-28 10:43:45.619967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:10.855 [2024-09-28 10:43:45.619979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.603 ms 00:23:10.855 [2024-09-28 10:43:45.619985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.855 [2024-09-28 10:43:45.620038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.855 [2024-09-28 10:43:45.620045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:10.855 [2024-09-28 10:43:45.620052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:23:10.855 [2024-09-28 10:43:45.620058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.855 
[2024-09-28 10:43:45.620768] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 86.320 ms, result 0 00:24:20.132  Copying: 32/1024 [MB] (32 MBps) Copying: 60/1024 [MB] (27 MBps) Copying: 85/1024 [MB] (24 MBps) Copying: 100/1024 [MB] (15 MBps) Copying: 115/1024 [MB] (14 MBps) Copying: 150/1024 [MB] (34 MBps) Copying: 173/1024 [MB] (22 MBps) Copying: 186/1024 [MB] (13 MBps) Copying: 203/1024 [MB] (16 MBps) Copying: 214/1024 [MB] (11 MBps) Copying: 226/1024 [MB] (12 MBps) Copying: 244/1024 [MB] (18 MBps) Copying: 260/1024 [MB] (15 MBps) Copying: 279/1024 [MB] (18 MBps) Copying: 292/1024 [MB] (13 MBps) Copying: 311/1024 [MB] (18 MBps) Copying: 326/1024 [MB] (14 MBps) Copying: 336/1024 [MB] (10 MBps) Copying: 346/1024 [MB] (10 MBps) Copying: 362/1024 [MB] (16 MBps) Copying: 373/1024 [MB] (11 MBps) Copying: 383/1024 [MB] (10 MBps) Copying: 394/1024 [MB] (10 MBps) Copying: 410/1024 [MB] (15 MBps) Copying: 426/1024 [MB] (16 MBps) Copying: 444/1024 [MB] (17 MBps) Copying: 464/1024 [MB] (20 MBps) Copying: 475/1024 [MB] (10 MBps) Copying: 486/1024 [MB] (11 MBps) Copying: 498/1024 [MB] (11 MBps) Copying: 516/1024 [MB] (18 MBps) Copying: 529/1024 [MB] (13 MBps) Copying: 540/1024 [MB] (10 MBps) Copying: 551/1024 [MB] (11 MBps) Copying: 563/1024 [MB] (12 MBps) Copying: 577/1024 [MB] (14 MBps) Copying: 587/1024 [MB] (10 MBps) Copying: 602/1024 [MB] (15 MBps) Copying: 618/1024 [MB] (15 MBps) Copying: 634/1024 [MB] (15 MBps) Copying: 652/1024 [MB] (18 MBps) Copying: 667/1024 [MB] (14 MBps) Copying: 681/1024 [MB] (13 MBps) Copying: 692/1024 [MB] (10 MBps) Copying: 718920/1048576 [kB] (10128 kBps) Copying: 713/1024 [MB] (11 MBps) Copying: 725/1024 [MB] (12 MBps) Copying: 738/1024 [MB] (12 MBps) Copying: 750/1024 [MB] (12 MBps) Copying: 761/1024 [MB] (11 MBps) Copying: 772/1024 [MB] (11 MBps) Copying: 783/1024 [MB] (10 MBps) Copying: 793/1024 [MB] (10 MBps) Copying: 804/1024 [MB] (11 MBps) Copying: 815/1024 [MB] (11 MBps) Copying: 845552/1048576 [kB] (10108 kBps) Copying: 835/1024 [MB] (10 MBps) Copying: 846/1024 [MB] (10 MBps) Copying: 856/1024 [MB] (10 MBps) Copying: 867/1024 [MB] (10 MBps) Copying: 897/1024 [MB] (30 MBps) Copying: 925/1024 [MB] (27 MBps) Copying: 942/1024 [MB] (17 MBps) Copying: 964/1024 [MB] (21 MBps) Copying: 976/1024 [MB] (11 MBps) Copying: 988/1024 [MB] (11 MBps) Copying: 1002/1024 [MB] (14 MBps) Copying: 1018/1024 [MB] (16 MBps) Copying: 1048432/1048576 [kB] (5208 kBps) Copying: 1024/1024 [MB] (average 14 MBps)[2024-09-28 10:44:54.776845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.132 [2024-09-28 10:44:54.777091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:20.132 [2024-09-28 10:44:54.777119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:20.132 [2024-09-28 10:44:54.777130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.132 [2024-09-28 10:44:54.779353] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:20.132 [2024-09-28 10:44:54.782055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.132 [2024-09-28 10:44:54.782102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:20.132 [2024-09-28 10:44:54.782132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.642 ms 00:24:20.132 [2024-09-28 10:44:54.782142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:24:20.132 [2024-09-28 10:44:54.793686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.132 [2024-09-28 10:44:54.793733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:20.132 [2024-09-28 10:44:54.793748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.523 ms 00:24:20.132 [2024-09-28 10:44:54.793767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.132 [2024-09-28 10:44:54.820956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.133 [2024-09-28 10:44:54.821039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:20.133 [2024-09-28 10:44:54.821052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.169 ms 00:24:20.133 [2024-09-28 10:44:54.821067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.133 [2024-09-28 10:44:54.827412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.133 [2024-09-28 10:44:54.827460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:20.133 [2024-09-28 10:44:54.827481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.306 ms 00:24:20.133 [2024-09-28 10:44:54.827489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.133 [2024-09-28 10:44:54.830220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.133 [2024-09-28 10:44:54.830267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:20.133 [2024-09-28 10:44:54.830278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.668 ms 00:24:20.133 [2024-09-28 10:44:54.830287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.133 [2024-09-28 10:44:54.834718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.133 [2024-09-28 10:44:54.834774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:20.133 [2024-09-28 10:44:54.834786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.389 ms 00:24:20.133 [2024-09-28 10:44:54.834795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.396 [2024-09-28 10:44:55.006080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.396 [2024-09-28 10:44:55.006270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:20.396 [2024-09-28 10:44:55.006294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 171.236 ms 00:24:20.396 [2024-09-28 10:44:55.006303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.396 [2024-09-28 10:44:55.009421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.396 [2024-09-28 10:44:55.009607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:20.396 [2024-09-28 10:44:55.009625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.093 ms 00:24:20.396 [2024-09-28 10:44:55.009633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.396 [2024-09-28 10:44:55.012687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.396 [2024-09-28 10:44:55.012872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:20.396 [2024-09-28 10:44:55.012891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.931 ms 00:24:20.396 [2024-09-28 
10:44:55.012899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.396 [2024-09-28 10:44:55.015398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.396 [2024-09-28 10:44:55.015450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:20.396 [2024-09-28 10:44:55.015462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.391 ms 00:24:20.396 [2024-09-28 10:44:55.015471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.396 [2024-09-28 10:44:55.017496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.396 [2024-09-28 10:44:55.017544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:20.396 [2024-09-28 10:44:55.017554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.956 ms 00:24:20.396 [2024-09-28 10:44:55.017562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.396 [2024-09-28 10:44:55.017602] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:20.396 [2024-09-28 10:44:55.017618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 101888 / 261120 wr_cnt: 1 state: open 00:24:20.396 [2024-09-28 10:44:55.017640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:20.396 [2024-09-28 10:44:55.017649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:20.396 [2024-09-28 10:44:55.017657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:20.396 [2024-09-28 10:44:55.017666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:20.396 [2024-09-28 10:44:55.017675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:20.396 [2024-09-28 10:44:55.017684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:20.396 [2024-09-28 10:44:55.017692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:20.396 [2024-09-28 10:44:55.017700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:20.396 [2024-09-28 10:44:55.017709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:20.396 [2024-09-28 10:44:55.017717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:20.396 [2024-09-28 10:44:55.017726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:20.396 [2024-09-28 10:44:55.017734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:20.396 [2024-09-28 10:44:55.017742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:20.396 [2024-09-28 10:44:55.017750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:20.396 [2024-09-28 10:44:55.017758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:20.396 [2024-09-28 10:44:55.017766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:20.396 [2024-09-28 
10:44:55.017774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:20.396 [2024-09-28 10:44:55.017781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:20.396 [2024-09-28 10:44:55.017791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:20.396 [2024-09-28 10:44:55.017800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:20.396 [2024-09-28 10:44:55.017808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:20.396 [2024-09-28 10:44:55.017815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:20.396 [2024-09-28 10:44:55.017823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:20.396 [2024-09-28 10:44:55.017831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:20.396 [2024-09-28 10:44:55.017840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:20.396 [2024-09-28 10:44:55.017850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:20.396 [2024-09-28 10:44:55.017857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:20.396 [2024-09-28 10:44:55.017865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:20.396 [2024-09-28 10:44:55.017874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:20.396 [2024-09-28 10:44:55.017883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:20.396 [2024-09-28 10:44:55.017892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:20.396 [2024-09-28 10:44:55.017908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:20.396 [2024-09-28 10:44:55.017917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:20.396 [2024-09-28 10:44:55.017924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:20.396 [2024-09-28 10:44:55.017932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:20.396 [2024-09-28 10:44:55.017941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:20.396 [2024-09-28 10:44:55.017949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:20.396 [2024-09-28 10:44:55.017957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:20.396 [2024-09-28 10:44:55.017988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:20.396 [2024-09-28 10:44:55.018001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:20.396 [2024-09-28 10:44:55.018014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 
00:24:20.396 [2024-09-28 10:44:55.018023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:20.396 [2024-09-28 10:44:55.018032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:20.396 [2024-09-28 10:44:55.018041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:20.396 [2024-09-28 10:44:55.018049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:20.396 [2024-09-28 10:44:55.018058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:20.396 [2024-09-28 10:44:55.018067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:20.396 [2024-09-28 10:44:55.018075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:20.396 [2024-09-28 10:44:55.018083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:20.396 [2024-09-28 10:44:55.018098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:20.396 [2024-09-28 10:44:55.018106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:20.396 [2024-09-28 10:44:55.018115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:20.396 [2024-09-28 10:44:55.018123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:20.396 [2024-09-28 10:44:55.018132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:20.396 [2024-09-28 10:44:55.018139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:20.397 [2024-09-28 10:44:55.018147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:20.397 [2024-09-28 10:44:55.018154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:20.397 [2024-09-28 10:44:55.018161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:20.397 [2024-09-28 10:44:55.018169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:20.397 [2024-09-28 10:44:55.018177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:20.397 [2024-09-28 10:44:55.018184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:20.397 [2024-09-28 10:44:55.018192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:20.397 [2024-09-28 10:44:55.018200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:20.397 [2024-09-28 10:44:55.018211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:20.397 [2024-09-28 10:44:55.018221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:20.397 [2024-09-28 10:44:55.018229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 
wr_cnt: 0 state: free 00:24:20.397 [2024-09-28 10:44:55.018239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:20.397 [2024-09-28 10:44:55.018246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:20.397 [2024-09-28 10:44:55.018254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:20.397 [2024-09-28 10:44:55.018264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:20.397 [2024-09-28 10:44:55.018272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:20.397 [2024-09-28 10:44:55.018280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:20.397 [2024-09-28 10:44:55.018288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:20.397 [2024-09-28 10:44:55.018295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:20.397 [2024-09-28 10:44:55.018304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:20.397 [2024-09-28 10:44:55.018312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:20.397 [2024-09-28 10:44:55.018319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:20.397 [2024-09-28 10:44:55.018328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:20.397 [2024-09-28 10:44:55.018335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:20.397 [2024-09-28 10:44:55.018342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:20.397 [2024-09-28 10:44:55.018349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:20.397 [2024-09-28 10:44:55.018358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:20.397 [2024-09-28 10:44:55.018366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:20.397 [2024-09-28 10:44:55.018374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:20.397 [2024-09-28 10:44:55.018381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:20.397 [2024-09-28 10:44:55.018388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:20.397 [2024-09-28 10:44:55.018399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:20.397 [2024-09-28 10:44:55.018407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:20.397 [2024-09-28 10:44:55.018415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:20.397 [2024-09-28 10:44:55.018424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:20.397 [2024-09-28 10:44:55.018432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:20.397 [2024-09-28 10:44:55.018443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:20.397 [2024-09-28 10:44:55.018451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:20.397 [2024-09-28 10:44:55.018458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:20.397 [2024-09-28 10:44:55.018467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:20.397 [2024-09-28 10:44:55.018475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:20.397 [2024-09-28 10:44:55.018483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:20.397 [2024-09-28 10:44:55.018491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:20.397 [2024-09-28 10:44:55.018500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:20.397 [2024-09-28 10:44:55.018516] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:20.397 [2024-09-28 10:44:55.018526] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3e3420ac-c58b-4945-9d91-c714566e6f15 00:24:20.397 [2024-09-28 10:44:55.018561] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 101888 00:24:20.397 [2024-09-28 10:44:55.018570] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 102848 00:24:20.397 [2024-09-28 10:44:55.018578] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 101888 00:24:20.397 [2024-09-28 10:44:55.018588] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0094 00:24:20.397 [2024-09-28 10:44:55.018595] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:20.397 [2024-09-28 10:44:55.018606] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:20.397 [2024-09-28 10:44:55.018614] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:20.397 [2024-09-28 10:44:55.018637] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:20.397 [2024-09-28 10:44:55.018647] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:20.397 [2024-09-28 10:44:55.018656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.397 [2024-09-28 10:44:55.018664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:20.397 [2024-09-28 10:44:55.018673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.055 ms 00:24:20.397 [2024-09-28 10:44:55.018685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.397 [2024-09-28 10:44:55.020901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.397 [2024-09-28 10:44:55.020944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:20.397 [2024-09-28 10:44:55.020954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.196 ms 00:24:20.397 [2024-09-28 10:44:55.020996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.397 [2024-09-28 10:44:55.021143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.397 [2024-09-28 10:44:55.021159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:20.397 [2024-09-28 10:44:55.021169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:24:20.397 [2024-09-28 10:44:55.021177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.397 [2024-09-28 10:44:55.028025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:20.397 [2024-09-28 10:44:55.028068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:20.397 [2024-09-28 10:44:55.028079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:20.397 [2024-09-28 10:44:55.028088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.397 [2024-09-28 10:44:55.028150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:20.397 [2024-09-28 10:44:55.028166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:20.397 [2024-09-28 10:44:55.028175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:20.397 [2024-09-28 10:44:55.028183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.397 [2024-09-28 10:44:55.028229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:20.397 [2024-09-28 10:44:55.028240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:20.397 [2024-09-28 10:44:55.028249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:20.397 [2024-09-28 10:44:55.028257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.397 [2024-09-28 10:44:55.028272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:20.397 [2024-09-28 10:44:55.028281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:20.397 [2024-09-28 10:44:55.028293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:20.397 [2024-09-28 10:44:55.028300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.397 [2024-09-28 10:44:55.041849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:20.397 [2024-09-28 10:44:55.041901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:20.397 [2024-09-28 10:44:55.041913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:20.397 [2024-09-28 10:44:55.041922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.397 [2024-09-28 10:44:55.053236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:20.397 [2024-09-28 10:44:55.053298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:20.397 [2024-09-28 10:44:55.053310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:20.397 [2024-09-28 10:44:55.053319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.397 [2024-09-28 10:44:55.053376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:20.397 [2024-09-28 10:44:55.053386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:20.397 [2024-09-28 10:44:55.053397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:20.397 [2024-09-28 10:44:55.053405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.397 [2024-09-28 10:44:55.053470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:24:20.397 [2024-09-28 10:44:55.053482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:20.397 [2024-09-28 10:44:55.053491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:20.397 [2024-09-28 10:44:55.053502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.397 [2024-09-28 10:44:55.053576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:20.398 [2024-09-28 10:44:55.053588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:20.398 [2024-09-28 10:44:55.053597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:20.398 [2024-09-28 10:44:55.053605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.398 [2024-09-28 10:44:55.053635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:20.398 [2024-09-28 10:44:55.053646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:20.398 [2024-09-28 10:44:55.053655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:20.398 [2024-09-28 10:44:55.053663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.398 [2024-09-28 10:44:55.053707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:20.398 [2024-09-28 10:44:55.053718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:20.398 [2024-09-28 10:44:55.053728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:20.398 [2024-09-28 10:44:55.053736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.398 [2024-09-28 10:44:55.053784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:20.398 [2024-09-28 10:44:55.053796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:20.398 [2024-09-28 10:44:55.053805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:20.398 [2024-09-28 10:44:55.053816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.398 [2024-09-28 10:44:55.053990] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 280.910 ms, result 0 00:24:21.341 00:24:21.341 00:24:21.602 10:44:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:24:23.515 10:44:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:23.515 [2024-09-28 10:44:58.173211] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:24:23.515 [2024-09-28 10:44:58.173338] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91857 ] 00:24:23.776 [2024-09-28 10:44:58.304302] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:24:23.776 [2024-09-28 10:44:58.321395] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:23.776 [2024-09-28 10:44:58.374126] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:23.776 [2024-09-28 10:44:58.489434] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:23.776 [2024-09-28 10:44:58.489519] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:24.039 [2024-09-28 10:44:58.651637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.039 [2024-09-28 10:44:58.651889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:24.039 [2024-09-28 10:44:58.651915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:24.039 [2024-09-28 10:44:58.651924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.039 [2024-09-28 10:44:58.652025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.039 [2024-09-28 10:44:58.652038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:24.039 [2024-09-28 10:44:58.652048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:24:24.039 [2024-09-28 10:44:58.652057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.039 [2024-09-28 10:44:58.652090] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:24.039 [2024-09-28 10:44:58.652349] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:24.039 [2024-09-28 10:44:58.652366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.039 [2024-09-28 10:44:58.652375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:24.039 [2024-09-28 10:44:58.652385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:24:24.039 [2024-09-28 10:44:58.652398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.039 [2024-09-28 10:44:58.654064] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:24.039 [2024-09-28 10:44:58.657795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.039 [2024-09-28 10:44:58.657849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:24.039 [2024-09-28 10:44:58.657864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.733 ms 00:24:24.039 [2024-09-28 10:44:58.657872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.039 [2024-09-28 10:44:58.657945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.039 [2024-09-28 10:44:58.657955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:24.039 [2024-09-28 10:44:58.657981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:24:24.039 [2024-09-28 10:44:58.657989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.039 [2024-09-28 10:44:58.665940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.039 [2024-09-28 10:44:58.666164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:24.039 [2024-09-28 10:44:58.666182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.908 ms 00:24:24.039 [2024-09-28 10:44:58.666201] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.039 [2024-09-28 10:44:58.666290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.039 [2024-09-28 10:44:58.666300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:24.039 [2024-09-28 10:44:58.666314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:24:24.039 [2024-09-28 10:44:58.666324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.039 [2024-09-28 10:44:58.666381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.039 [2024-09-28 10:44:58.666391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:24.039 [2024-09-28 10:44:58.666400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:24.039 [2024-09-28 10:44:58.666413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.039 [2024-09-28 10:44:58.666439] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:24.039 [2024-09-28 10:44:58.668462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.039 [2024-09-28 10:44:58.668498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:24.039 [2024-09-28 10:44:58.668508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.029 ms 00:24:24.039 [2024-09-28 10:44:58.668517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.039 [2024-09-28 10:44:58.668556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.039 [2024-09-28 10:44:58.668566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:24.039 [2024-09-28 10:44:58.668574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:24.039 [2024-09-28 10:44:58.668582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.039 [2024-09-28 10:44:58.668608] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:24.039 [2024-09-28 10:44:58.668636] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:24.039 [2024-09-28 10:44:58.668677] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:24.039 [2024-09-28 10:44:58.668694] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:24.039 [2024-09-28 10:44:58.668805] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:24.039 [2024-09-28 10:44:58.668819] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:24.039 [2024-09-28 10:44:58.668831] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:24.039 [2024-09-28 10:44:58.668844] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:24.039 [2024-09-28 10:44:58.668854] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:24.039 [2024-09-28 10:44:58.668864] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:24.039 [2024-09-28 10:44:58.668872] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
address size: 4 00:24:24.039 [2024-09-28 10:44:58.668880] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:24.039 [2024-09-28 10:44:58.668891] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:24.039 [2024-09-28 10:44:58.668901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.039 [2024-09-28 10:44:58.668909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:24.039 [2024-09-28 10:44:58.668921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:24:24.039 [2024-09-28 10:44:58.668931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.039 [2024-09-28 10:44:58.669032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.039 [2024-09-28 10:44:58.669045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:24.039 [2024-09-28 10:44:58.669054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:24:24.039 [2024-09-28 10:44:58.669061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.039 [2024-09-28 10:44:58.669165] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:24.039 [2024-09-28 10:44:58.669177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:24.039 [2024-09-28 10:44:58.669187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:24.040 [2024-09-28 10:44:58.669196] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:24.040 [2024-09-28 10:44:58.669207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:24.040 [2024-09-28 10:44:58.669216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:24.040 [2024-09-28 10:44:58.669230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:24.040 [2024-09-28 10:44:58.669239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:24.040 [2024-09-28 10:44:58.669258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:24.040 [2024-09-28 10:44:58.669267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:24.040 [2024-09-28 10:44:58.669275] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:24.040 [2024-09-28 10:44:58.669283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:24.040 [2024-09-28 10:44:58.669292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:24.040 [2024-09-28 10:44:58.669301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:24.040 [2024-09-28 10:44:58.669309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:24.040 [2024-09-28 10:44:58.669317] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:24.040 [2024-09-28 10:44:58.669326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:24.040 [2024-09-28 10:44:58.669336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:24.040 [2024-09-28 10:44:58.669344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:24.040 [2024-09-28 10:44:58.669353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:24.040 [2024-09-28 10:44:58.669361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:24.040 [2024-09-28 10:44:58.669369] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:24.040 [2024-09-28 10:44:58.669380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:24.040 [2024-09-28 10:44:58.669389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:24.040 [2024-09-28 10:44:58.669396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:24.040 [2024-09-28 10:44:58.669404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:24.040 [2024-09-28 10:44:58.669412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:24.040 [2024-09-28 10:44:58.669419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:24.040 [2024-09-28 10:44:58.669429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:24.040 [2024-09-28 10:44:58.669437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:24.040 [2024-09-28 10:44:58.669445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:24.040 [2024-09-28 10:44:58.669453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:24.040 [2024-09-28 10:44:58.669460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:24.040 [2024-09-28 10:44:58.669469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:24.040 [2024-09-28 10:44:58.669477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:24.040 [2024-09-28 10:44:58.669484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:24.040 [2024-09-28 10:44:58.669491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:24.040 [2024-09-28 10:44:58.669498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:24.040 [2024-09-28 10:44:58.669506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:24.040 [2024-09-28 10:44:58.669514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:24.040 [2024-09-28 10:44:58.669521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:24.040 [2024-09-28 10:44:58.669527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:24.040 [2024-09-28 10:44:58.669535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:24.040 [2024-09-28 10:44:58.669541] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:24.040 [2024-09-28 10:44:58.669549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:24.040 [2024-09-28 10:44:58.669559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:24.040 [2024-09-28 10:44:58.669568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:24.040 [2024-09-28 10:44:58.669576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:24.040 [2024-09-28 10:44:58.669583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:24.040 [2024-09-28 10:44:58.669593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:24.040 [2024-09-28 10:44:58.669601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:24.040 [2024-09-28 10:44:58.669609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:24.040 [2024-09-28 10:44:58.669616] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:24:24.040 [2024-09-28 10:44:58.669625] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:24.040 [2024-09-28 10:44:58.669636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:24.040 [2024-09-28 10:44:58.669646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:24.040 [2024-09-28 10:44:58.669653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:24.040 [2024-09-28 10:44:58.669661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:24.040 [2024-09-28 10:44:58.669668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:24.040 [2024-09-28 10:44:58.669675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:24.040 [2024-09-28 10:44:58.669682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:24.040 [2024-09-28 10:44:58.669690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:24.040 [2024-09-28 10:44:58.669697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:24.040 [2024-09-28 10:44:58.669704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:24.040 [2024-09-28 10:44:58.669711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:24.040 [2024-09-28 10:44:58.669718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:24.040 [2024-09-28 10:44:58.669725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:24.040 [2024-09-28 10:44:58.669733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:24.040 [2024-09-28 10:44:58.669740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:24.040 [2024-09-28 10:44:58.669747] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:24.040 [2024-09-28 10:44:58.669761] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:24.040 [2024-09-28 10:44:58.669770] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:24.040 [2024-09-28 10:44:58.669779] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:24.040 [2024-09-28 10:44:58.669786] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:24.040 [2024-09-28 10:44:58.669793] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:24.040 [2024-09-28 10:44:58.669801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.040 [2024-09-28 10:44:58.669808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:24.040 [2024-09-28 10:44:58.669816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.704 ms 00:24:24.040 [2024-09-28 10:44:58.669826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.040 [2024-09-28 10:44:58.694977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.040 [2024-09-28 10:44:58.695167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:24.040 [2024-09-28 10:44:58.695242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.085 ms 00:24:24.040 [2024-09-28 10:44:58.695277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.040 [2024-09-28 10:44:58.695424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.040 [2024-09-28 10:44:58.695471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:24.040 [2024-09-28 10:44:58.695499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:24:24.040 [2024-09-28 10:44:58.695527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.040 [2024-09-28 10:44:58.708377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.040 [2024-09-28 10:44:58.708546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:24.040 [2024-09-28 10:44:58.708615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.747 ms 00:24:24.040 [2024-09-28 10:44:58.708648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.040 [2024-09-28 10:44:58.708696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.040 [2024-09-28 10:44:58.708719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:24.040 [2024-09-28 10:44:58.708739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:24.040 [2024-09-28 10:44:58.708759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.040 [2024-09-28 10:44:58.709326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.040 [2024-09-28 10:44:58.709581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:24.040 [2024-09-28 10:44:58.709649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.502 ms 00:24:24.040 [2024-09-28 10:44:58.709672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.040 [2024-09-28 10:44:58.709843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.040 [2024-09-28 10:44:58.709867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:24.040 [2024-09-28 10:44:58.709936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:24:24.040 [2024-09-28 10:44:58.709984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.040 [2024-09-28 10:44:58.716803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.040 [2024-09-28 
10:44:58.716947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:24.041 [2024-09-28 10:44:58.717030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.778 ms 00:24:24.041 [2024-09-28 10:44:58.717060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.041 [2024-09-28 10:44:58.720759] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:24:24.041 [2024-09-28 10:44:58.720923] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:24.041 [2024-09-28 10:44:58.721012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.041 [2024-09-28 10:44:58.721033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:24.041 [2024-09-28 10:44:58.721053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.841 ms 00:24:24.041 [2024-09-28 10:44:58.721084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.041 [2024-09-28 10:44:58.736735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.041 [2024-09-28 10:44:58.736900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:24.041 [2024-09-28 10:44:58.736919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.593 ms 00:24:24.041 [2024-09-28 10:44:58.736930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.041 [2024-09-28 10:44:58.739571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.041 [2024-09-28 10:44:58.739617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:24.041 [2024-09-28 10:44:58.739628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.578 ms 00:24:24.041 [2024-09-28 10:44:58.739636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.041 [2024-09-28 10:44:58.742222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.041 [2024-09-28 10:44:58.742367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:24.041 [2024-09-28 10:44:58.742421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.535 ms 00:24:24.041 [2024-09-28 10:44:58.742444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.041 [2024-09-28 10:44:58.743530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.041 [2024-09-28 10:44:58.743842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:24.041 [2024-09-28 10:44:58.743885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.649 ms 00:24:24.041 [2024-09-28 10:44:58.743905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.041 [2024-09-28 10:44:58.771903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.041 [2024-09-28 10:44:58.771990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:24.041 [2024-09-28 10:44:58.772005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.892 ms 00:24:24.041 [2024-09-28 10:44:58.772014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.041 [2024-09-28 10:44:58.780247] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:24.041 [2024-09-28 10:44:58.783621] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.041 [2024-09-28 10:44:58.783674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:24.041 [2024-09-28 10:44:58.783686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.550 ms 00:24:24.041 [2024-09-28 10:44:58.783694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.041 [2024-09-28 10:44:58.783787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.041 [2024-09-28 10:44:58.783798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:24.041 [2024-09-28 10:44:58.783808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:24:24.041 [2024-09-28 10:44:58.783816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.041 [2024-09-28 10:44:58.785594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.041 [2024-09-28 10:44:58.785642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:24.041 [2024-09-28 10:44:58.785656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.738 ms 00:24:24.041 [2024-09-28 10:44:58.785665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.041 [2024-09-28 10:44:58.785694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.041 [2024-09-28 10:44:58.785703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:24.041 [2024-09-28 10:44:58.785713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:24.041 [2024-09-28 10:44:58.785726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.041 [2024-09-28 10:44:58.785768] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:24.041 [2024-09-28 10:44:58.785779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.041 [2024-09-28 10:44:58.785788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:24.041 [2024-09-28 10:44:58.785797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:24.041 [2024-09-28 10:44:58.785808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.041 [2024-09-28 10:44:58.791913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.041 [2024-09-28 10:44:58.791988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:24.041 [2024-09-28 10:44:58.792000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.085 ms 00:24:24.041 [2024-09-28 10:44:58.792009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.041 [2024-09-28 10:44:58.792095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.041 [2024-09-28 10:44:58.792112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:24.041 [2024-09-28 10:44:58.792123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:24:24.041 [2024-09-28 10:44:58.792132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.041 [2024-09-28 10:44:58.793334] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 141.206 ms, result 0 00:25:07.713  Copying: 1080/1048576 [kB] (1080 kBps) Copying: 4804/1048576 [kB] (3724 kBps) Copying: 19/1024 [MB] (14 MBps) Copying: 
48/1024 [MB] (28 MBps) Copying: 70/1024 [MB] (22 MBps) Copying: 106/1024 [MB] (35 MBps) Copying: 135/1024 [MB] (28 MBps) Copying: 163/1024 [MB] (28 MBps) Copying: 191/1024 [MB] (28 MBps) Copying: 221/1024 [MB] (29 MBps) Copying: 250/1024 [MB] (29 MBps) Copying: 279/1024 [MB] (28 MBps) Copying: 305/1024 [MB] (26 MBps) Copying: 327/1024 [MB] (21 MBps) Copying: 346/1024 [MB] (19 MBps) Copying: 377/1024 [MB] (30 MBps) Copying: 396/1024 [MB] (19 MBps) Copying: 426/1024 [MB] (29 MBps) Copying: 451/1024 [MB] (25 MBps) Copying: 484/1024 [MB] (32 MBps) Copying: 512/1024 [MB] (27 MBps) Copying: 536/1024 [MB] (24 MBps) Copying: 560/1024 [MB] (23 MBps) Copying: 588/1024 [MB] (28 MBps) Copying: 605/1024 [MB] (16 MBps) Copying: 633/1024 [MB] (27 MBps) Copying: 657/1024 [MB] (23 MBps) Copying: 680/1024 [MB] (23 MBps) Copying: 707/1024 [MB] (26 MBps) Copying: 729/1024 [MB] (21 MBps) Copying: 754/1024 [MB] (24 MBps) Copying: 774/1024 [MB] (20 MBps) Copying: 801/1024 [MB] (27 MBps) Copying: 828/1024 [MB] (26 MBps) Copying: 850/1024 [MB] (22 MBps) Copying: 876/1024 [MB] (26 MBps) Copying: 906/1024 [MB] (29 MBps) Copying: 935/1024 [MB] (28 MBps) Copying: 957/1024 [MB] (22 MBps) Copying: 973/1024 [MB] (16 MBps) Copying: 989/1024 [MB] (15 MBps) Copying: 1005/1024 [MB] (16 MBps) Copying: 1021/1024 [MB] (15 MBps) Copying: 1024/1024 [MB] (average 23 MBps)[2024-09-28 10:45:42.430017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.713 [2024-09-28 10:45:42.430386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:07.713 [2024-09-28 10:45:42.430472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:07.713 [2024-09-28 10:45:42.430503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.713 [2024-09-28 10:45:42.430556] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:07.713 [2024-09-28 10:45:42.431415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.713 [2024-09-28 10:45:42.431592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:07.713 [2024-09-28 10:45:42.432189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.779 ms 00:25:07.713 [2024-09-28 10:45:42.432256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.713 [2024-09-28 10:45:42.432679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.713 [2024-09-28 10:45:42.432793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:07.713 [2024-09-28 10:45:42.432865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:25:07.713 [2024-09-28 10:45:42.432894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.713 [2024-09-28 10:45:42.448474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.713 [2024-09-28 10:45:42.448660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:07.713 [2024-09-28 10:45:42.448885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.540 ms 00:25:07.713 [2024-09-28 10:45:42.448919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.713 [2024-09-28 10:45:42.455791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.713 [2024-09-28 10:45:42.455955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:07.713 [2024-09-28 10:45:42.456161] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.800 ms 00:25:07.713 [2024-09-28 10:45:42.456204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.713 [2024-09-28 10:45:42.459232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.713 [2024-09-28 10:45:42.459399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:07.713 [2024-09-28 10:45:42.459458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.943 ms 00:25:07.713 [2024-09-28 10:45:42.459480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.713 [2024-09-28 10:45:42.464251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.713 [2024-09-28 10:45:42.464411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:07.713 [2024-09-28 10:45:42.464480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.607 ms 00:25:07.713 [2024-09-28 10:45:42.464503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.713 [2024-09-28 10:45:42.469074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.713 [2024-09-28 10:45:42.469209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:07.713 [2024-09-28 10:45:42.469227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.519 ms 00:25:07.713 [2024-09-28 10:45:42.469236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.713 [2024-09-28 10:45:42.472554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.713 [2024-09-28 10:45:42.472717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:07.713 [2024-09-28 10:45:42.472733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.296 ms 00:25:07.713 [2024-09-28 10:45:42.472741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.713 [2024-09-28 10:45:42.475548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.713 [2024-09-28 10:45:42.475598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:07.713 [2024-09-28 10:45:42.475608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.771 ms 00:25:07.713 [2024-09-28 10:45:42.475615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.713 [2024-09-28 10:45:42.477776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.713 [2024-09-28 10:45:42.477926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:07.713 [2024-09-28 10:45:42.478017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.119 ms 00:25:07.713 [2024-09-28 10:45:42.478040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.713 [2024-09-28 10:45:42.480275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.713 [2024-09-28 10:45:42.480432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:07.713 [2024-09-28 10:45:42.480484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.131 ms 00:25:07.713 [2024-09-28 10:45:42.480495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.713 [2024-09-28 10:45:42.480638] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:07.713 [2024-09-28 10:45:42.480670] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:07.713 [2024-09-28 10:45:42.480683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:25:07.713 [2024-09-28 10:45:42.480691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:07.713 [2024-09-28 10:45:42.480701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:07.713 [2024-09-28 10:45:42.480709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:07.713 [2024-09-28 10:45:42.480717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:07.713 [2024-09-28 10:45:42.480725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:07.713 [2024-09-28 10:45:42.480734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:07.713 [2024-09-28 10:45:42.480742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.480750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.480758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.480766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.480775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.480784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.480793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.480800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.480808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.480816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.480823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.480831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.480849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.480858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.480866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.480874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.480882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.480890] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.480899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.480908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.480917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.480925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.480934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.480943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.480955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.480984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.480993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 
10:45:42.481124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 
00:25:07.714 [2024-09-28 10:45:42.481324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:07.714 [2024-09-28 10:45:42.481498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:07.715 [2024-09-28 10:45:42.481506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:07.715 [2024-09-28 10:45:42.481514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 
wr_cnt: 0 state: free 00:25:07.715 [2024-09-28 10:45:42.481530] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:07.715 [2024-09-28 10:45:42.481545] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3e3420ac-c58b-4945-9d91-c714566e6f15 00:25:07.715 [2024-09-28 10:45:42.481556] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:25:07.715 [2024-09-28 10:45:42.481564] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 162752 00:25:07.715 [2024-09-28 10:45:42.481571] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 160768 00:25:07.715 [2024-09-28 10:45:42.481580] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0123 00:25:07.715 [2024-09-28 10:45:42.481589] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:07.715 [2024-09-28 10:45:42.481605] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:07.715 [2024-09-28 10:45:42.481613] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:07.715 [2024-09-28 10:45:42.481619] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:07.715 [2024-09-28 10:45:42.481627] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:07.715 [2024-09-28 10:45:42.481636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.715 [2024-09-28 10:45:42.481644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:07.715 [2024-09-28 10:45:42.481653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.002 ms 00:25:07.715 [2024-09-28 10:45:42.481660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.715 [2024-09-28 10:45:42.484238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.715 [2024-09-28 10:45:42.484272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:07.715 [2024-09-28 10:45:42.484283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.558 ms 00:25:07.715 [2024-09-28 10:45:42.484292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.715 [2024-09-28 10:45:42.484428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.715 [2024-09-28 10:45:42.484439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:07.715 [2024-09-28 10:45:42.484455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:25:07.715 [2024-09-28 10:45:42.484466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.977 [2024-09-28 10:45:42.491666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:07.977 [2024-09-28 10:45:42.491851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:07.977 [2024-09-28 10:45:42.491870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:07.977 [2024-09-28 10:45:42.491879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.977 [2024-09-28 10:45:42.491938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:07.977 [2024-09-28 10:45:42.491948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:07.977 [2024-09-28 10:45:42.491956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:07.977 [2024-09-28 10:45:42.491995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:25:07.977 [2024-09-28 10:45:42.492059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:07.977 [2024-09-28 10:45:42.492071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:07.977 [2024-09-28 10:45:42.492086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:07.977 [2024-09-28 10:45:42.492094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.977 [2024-09-28 10:45:42.492110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:07.977 [2024-09-28 10:45:42.492118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:07.977 [2024-09-28 10:45:42.492126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:07.977 [2024-09-28 10:45:42.492134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.977 [2024-09-28 10:45:42.507559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:07.977 [2024-09-28 10:45:42.507612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:07.977 [2024-09-28 10:45:42.507624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:07.977 [2024-09-28 10:45:42.507633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.977 [2024-09-28 10:45:42.518587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:07.977 [2024-09-28 10:45:42.518772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:07.977 [2024-09-28 10:45:42.518789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:07.977 [2024-09-28 10:45:42.518806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.977 [2024-09-28 10:45:42.518859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:07.977 [2024-09-28 10:45:42.518869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:07.977 [2024-09-28 10:45:42.518878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:07.977 [2024-09-28 10:45:42.518892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.977 [2024-09-28 10:45:42.518926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:07.977 [2024-09-28 10:45:42.518935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:07.977 [2024-09-28 10:45:42.518944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:07.977 [2024-09-28 10:45:42.518952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.977 [2024-09-28 10:45:42.519062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:07.977 [2024-09-28 10:45:42.519075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:07.977 [2024-09-28 10:45:42.519086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:07.977 [2024-09-28 10:45:42.519094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.977 [2024-09-28 10:45:42.519124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:07.977 [2024-09-28 10:45:42.519138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:07.977 [2024-09-28 10:45:42.519147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:07.977 [2024-09-28 
10:45:42.519155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.977 [2024-09-28 10:45:42.519205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:07.977 [2024-09-28 10:45:42.519216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:07.977 [2024-09-28 10:45:42.519224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:07.977 [2024-09-28 10:45:42.519232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.977 [2024-09-28 10:45:42.519281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:07.977 [2024-09-28 10:45:42.519293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:07.977 [2024-09-28 10:45:42.519301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:07.977 [2024-09-28 10:45:42.519310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.977 [2024-09-28 10:45:42.519445] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 89.400 ms, result 0 00:25:07.977 00:25:07.977 00:25:07.977 10:45:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:10.525 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:10.525 10:45:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:10.525 [2024-09-28 10:45:44.903308] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:25:10.525 [2024-09-28 10:45:44.903396] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92333 ] 00:25:10.525 [2024-09-28 10:45:45.026542] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:25:10.525 [2024-09-28 10:45:45.045575] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:10.525 [2024-09-28 10:45:45.083567] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:10.525 [2024-09-28 10:45:45.194170] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:10.525 [2024-09-28 10:45:45.194254] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:10.789 [2024-09-28 10:45:45.356565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.789 [2024-09-28 10:45:45.356625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:10.789 [2024-09-28 10:45:45.356641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:10.789 [2024-09-28 10:45:45.356650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.789 [2024-09-28 10:45:45.356705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.789 [2024-09-28 10:45:45.356720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:10.789 [2024-09-28 10:45:45.356730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:25:10.789 [2024-09-28 10:45:45.356738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.789 [2024-09-28 10:45:45.356762] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:10.789 [2024-09-28 10:45:45.357057] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:10.789 [2024-09-28 10:45:45.357077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.789 [2024-09-28 10:45:45.357087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:10.789 [2024-09-28 10:45:45.357096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:25:10.789 [2024-09-28 10:45:45.357107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.789 [2024-09-28 10:45:45.358802] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:10.789 [2024-09-28 10:45:45.362796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.789 [2024-09-28 10:45:45.363028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:10.789 [2024-09-28 10:45:45.363058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.996 ms 00:25:10.789 [2024-09-28 10:45:45.363068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.789 [2024-09-28 10:45:45.363159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.789 [2024-09-28 10:45:45.363171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:10.789 [2024-09-28 10:45:45.363179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:25:10.789 [2024-09-28 10:45:45.363193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.789 [2024-09-28 10:45:45.371273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.789 [2024-09-28 10:45:45.371317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:10.789 [2024-09-28 10:45:45.371328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.035 ms 00:25:10.789 [2024-09-28 10:45:45.371343] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.789 [2024-09-28 10:45:45.371433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.789 [2024-09-28 10:45:45.371443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:10.789 [2024-09-28 10:45:45.371454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:25:10.789 [2024-09-28 10:45:45.371462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.789 [2024-09-28 10:45:45.371521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.789 [2024-09-28 10:45:45.371537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:10.789 [2024-09-28 10:45:45.371550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:10.789 [2024-09-28 10:45:45.371558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.789 [2024-09-28 10:45:45.371584] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:10.789 [2024-09-28 10:45:45.373691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.789 [2024-09-28 10:45:45.373730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:10.789 [2024-09-28 10:45:45.373740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.113 ms 00:25:10.789 [2024-09-28 10:45:45.373756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.789 [2024-09-28 10:45:45.373796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.789 [2024-09-28 10:45:45.373805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:10.789 [2024-09-28 10:45:45.373815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:25:10.789 [2024-09-28 10:45:45.373825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.789 [2024-09-28 10:45:45.373851] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:10.789 [2024-09-28 10:45:45.373872] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:10.789 [2024-09-28 10:45:45.373909] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:10.789 [2024-09-28 10:45:45.373927] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:10.789 [2024-09-28 10:45:45.374061] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:10.789 [2024-09-28 10:45:45.374076] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:10.789 [2024-09-28 10:45:45.374092] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:10.789 [2024-09-28 10:45:45.374107] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:10.789 [2024-09-28 10:45:45.374117] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:10.789 [2024-09-28 10:45:45.374126] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:10.789 [2024-09-28 10:45:45.374133] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
address size: 4 00:25:10.789 [2024-09-28 10:45:45.374143] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:10.789 [2024-09-28 10:45:45.374152] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:10.789 [2024-09-28 10:45:45.374164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.789 [2024-09-28 10:45:45.374171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:10.789 [2024-09-28 10:45:45.374182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:25:10.789 [2024-09-28 10:45:45.374190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.789 [2024-09-28 10:45:45.374273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.789 [2024-09-28 10:45:45.374284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:10.789 [2024-09-28 10:45:45.374291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:25:10.789 [2024-09-28 10:45:45.374299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.789 [2024-09-28 10:45:45.374398] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:10.789 [2024-09-28 10:45:45.374410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:10.789 [2024-09-28 10:45:45.374424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:10.789 [2024-09-28 10:45:45.374433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:10.789 [2024-09-28 10:45:45.374443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:10.789 [2024-09-28 10:45:45.374451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:10.789 [2024-09-28 10:45:45.374460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:10.789 [2024-09-28 10:45:45.374470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:10.789 [2024-09-28 10:45:45.374485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:10.789 [2024-09-28 10:45:45.374496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:10.789 [2024-09-28 10:45:45.374508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:10.789 [2024-09-28 10:45:45.374516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:10.789 [2024-09-28 10:45:45.374524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:10.789 [2024-09-28 10:45:45.374532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:10.789 [2024-09-28 10:45:45.374542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:10.789 [2024-09-28 10:45:45.374550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:10.789 [2024-09-28 10:45:45.374573] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:10.789 [2024-09-28 10:45:45.374581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:10.789 [2024-09-28 10:45:45.374590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:10.789 [2024-09-28 10:45:45.374598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:10.789 [2024-09-28 10:45:45.374606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:10.789 [2024-09-28 10:45:45.374614] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:10.789 [2024-09-28 10:45:45.374622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:10.789 [2024-09-28 10:45:45.374630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:10.789 [2024-09-28 10:45:45.374638] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:10.789 [2024-09-28 10:45:45.374654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:10.789 [2024-09-28 10:45:45.374662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:10.789 [2024-09-28 10:45:45.374670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:10.789 [2024-09-28 10:45:45.374678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:10.789 [2024-09-28 10:45:45.374686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:10.789 [2024-09-28 10:45:45.374694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:10.789 [2024-09-28 10:45:45.374702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:10.789 [2024-09-28 10:45:45.374709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:10.789 [2024-09-28 10:45:45.374717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:10.789 [2024-09-28 10:45:45.374726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:10.789 [2024-09-28 10:45:45.374733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:10.789 [2024-09-28 10:45:45.374740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:10.790 [2024-09-28 10:45:45.374748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:10.790 [2024-09-28 10:45:45.374756] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:10.790 [2024-09-28 10:45:45.374764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:10.790 [2024-09-28 10:45:45.374771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:10.790 [2024-09-28 10:45:45.374783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:10.790 [2024-09-28 10:45:45.374796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:10.790 [2024-09-28 10:45:45.374805] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:10.790 [2024-09-28 10:45:45.374815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:10.790 [2024-09-28 10:45:45.374826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:10.790 [2024-09-28 10:45:45.374836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:10.790 [2024-09-28 10:45:45.374845] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:10.790 [2024-09-28 10:45:45.374853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:10.790 [2024-09-28 10:45:45.374861] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:10.790 [2024-09-28 10:45:45.374869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:10.790 [2024-09-28 10:45:45.374877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:10.790 [2024-09-28 10:45:45.374886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:25:10.790 [2024-09-28 10:45:45.374897] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:10.790 [2024-09-28 10:45:45.374907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:10.790 [2024-09-28 10:45:45.374917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:10.790 [2024-09-28 10:45:45.374925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:10.790 [2024-09-28 10:45:45.374935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:10.790 [2024-09-28 10:45:45.374942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:10.790 [2024-09-28 10:45:45.374949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:10.790 [2024-09-28 10:45:45.375221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:10.790 [2024-09-28 10:45:45.375279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:10.790 [2024-09-28 10:45:45.375310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:10.790 [2024-09-28 10:45:45.375341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:10.790 [2024-09-28 10:45:45.375369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:10.790 [2024-09-28 10:45:45.375399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:10.790 [2024-09-28 10:45:45.375427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:10.790 [2024-09-28 10:45:45.375456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:10.790 [2024-09-28 10:45:45.375485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:10.790 [2024-09-28 10:45:45.375513] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:10.790 [2024-09-28 10:45:45.375552] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:10.790 [2024-09-28 10:45:45.375586] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:10.790 [2024-09-28 10:45:45.375616] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:10.790 [2024-09-28 10:45:45.375649] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:10.790 [2024-09-28 10:45:45.375679] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:10.790 [2024-09-28 10:45:45.375709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.790 [2024-09-28 10:45:45.375729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:10.790 [2024-09-28 10:45:45.375751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.380 ms 00:25:10.790 [2024-09-28 10:45:45.375770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.790 [2024-09-28 10:45:45.399527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.790 [2024-09-28 10:45:45.399739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:10.790 [2024-09-28 10:45:45.399844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.663 ms 00:25:10.790 [2024-09-28 10:45:45.399869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.790 [2024-09-28 10:45:45.400003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.790 [2024-09-28 10:45:45.400096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:10.790 [2024-09-28 10:45:45.400182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:25:10.790 [2024-09-28 10:45:45.400211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.790 [2024-09-28 10:45:45.412257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.790 [2024-09-28 10:45:45.412408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:10.790 [2024-09-28 10:45:45.412462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.939 ms 00:25:10.790 [2024-09-28 10:45:45.412486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.790 [2024-09-28 10:45:45.412535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.790 [2024-09-28 10:45:45.412557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:10.790 [2024-09-28 10:45:45.412583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:10.790 [2024-09-28 10:45:45.412605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.790 [2024-09-28 10:45:45.413151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.790 [2024-09-28 10:45:45.413280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:10.790 [2024-09-28 10:45:45.413296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.474 ms 00:25:10.790 [2024-09-28 10:45:45.413305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.790 [2024-09-28 10:45:45.413451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.790 [2024-09-28 10:45:45.413462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:10.790 [2024-09-28 10:45:45.413471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:25:10.790 [2024-09-28 10:45:45.413480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.790 [2024-09-28 10:45:45.419951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.790 [2024-09-28 
10:45:45.420043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:10.790 [2024-09-28 10:45:45.420054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.449 ms 00:25:10.790 [2024-09-28 10:45:45.420063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.790 [2024-09-28 10:45:45.423697] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:10.790 [2024-09-28 10:45:45.423745] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:10.790 [2024-09-28 10:45:45.423758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.790 [2024-09-28 10:45:45.423767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:10.790 [2024-09-28 10:45:45.423784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.603 ms 00:25:10.790 [2024-09-28 10:45:45.423792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.790 [2024-09-28 10:45:45.439365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.790 [2024-09-28 10:45:45.439418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:10.790 [2024-09-28 10:45:45.439431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.523 ms 00:25:10.790 [2024-09-28 10:45:45.439439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.790 [2024-09-28 10:45:45.442229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.790 [2024-09-28 10:45:45.442415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:10.790 [2024-09-28 10:45:45.442435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.737 ms 00:25:10.790 [2024-09-28 10:45:45.442443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.790 [2024-09-28 10:45:45.445157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.790 [2024-09-28 10:45:45.445202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:10.790 [2024-09-28 10:45:45.445212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.674 ms 00:25:10.790 [2024-09-28 10:45:45.445230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.790 [2024-09-28 10:45:45.445573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.790 [2024-09-28 10:45:45.445587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:10.790 [2024-09-28 10:45:45.445597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:25:10.790 [2024-09-28 10:45:45.445604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.790 [2024-09-28 10:45:45.468836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.790 [2024-09-28 10:45:45.469052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:10.790 [2024-09-28 10:45:45.469130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.208 ms 00:25:10.790 [2024-09-28 10:45:45.469157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.790 [2024-09-28 10:45:45.477376] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:10.790 [2024-09-28 10:45:45.480595] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.790 [2024-09-28 10:45:45.480731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:10.790 [2024-09-28 10:45:45.480791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.385 ms 00:25:10.790 [2024-09-28 10:45:45.480816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.790 [2024-09-28 10:45:45.480909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.791 [2024-09-28 10:45:45.480935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:10.791 [2024-09-28 10:45:45.480957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:25:10.791 [2024-09-28 10:45:45.480995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.791 [2024-09-28 10:45:45.481736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.791 [2024-09-28 10:45:45.481867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:10.791 [2024-09-28 10:45:45.481927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.687 ms 00:25:10.791 [2024-09-28 10:45:45.481952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.791 [2024-09-28 10:45:45.482021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.791 [2024-09-28 10:45:45.482045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:10.791 [2024-09-28 10:45:45.482071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:10.791 [2024-09-28 10:45:45.482093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.791 [2024-09-28 10:45:45.482145] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:10.791 [2024-09-28 10:45:45.482162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.791 [2024-09-28 10:45:45.482170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:10.791 [2024-09-28 10:45:45.482182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:25:10.791 [2024-09-28 10:45:45.482193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.791 [2024-09-28 10:45:45.486890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.791 [2024-09-28 10:45:45.486936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:10.791 [2024-09-28 10:45:45.486947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.676 ms 00:25:10.791 [2024-09-28 10:45:45.486955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.791 [2024-09-28 10:45:45.487063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.791 [2024-09-28 10:45:45.487074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:10.791 [2024-09-28 10:45:45.487084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:25:10.791 [2024-09-28 10:45:45.487092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.791 [2024-09-28 10:45:45.488197] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 131.171 ms, result 0 00:26:17.559  Copying: 18/1024 [MB] (18 MBps) Copying: 32/1024 [MB] (13 MBps) Copying: 42/1024 [MB] (10 MBps) Copying: 54/1024 [MB] (11 
MBps) Copying: 65/1024 [MB] (10 MBps) Copying: 79/1024 [MB] (13 MBps) Copying: 93/1024 [MB] (14 MBps) Copying: 106/1024 [MB] (13 MBps) Copying: 121/1024 [MB] (14 MBps) Copying: 136/1024 [MB] (14 MBps) Copying: 152/1024 [MB] (15 MBps) Copying: 163/1024 [MB] (11 MBps) Copying: 175/1024 [MB] (11 MBps) Copying: 189/1024 [MB] (13 MBps) Copying: 201/1024 [MB] (12 MBps) Copying: 215/1024 [MB] (13 MBps) Copying: 226/1024 [MB] (11 MBps) Copying: 245/1024 [MB] (18 MBps) Copying: 271/1024 [MB] (26 MBps) Copying: 289/1024 [MB] (17 MBps) Copying: 306/1024 [MB] (16 MBps) Copying: 320/1024 [MB] (13 MBps) Copying: 333/1024 [MB] (12 MBps) Copying: 350/1024 [MB] (17 MBps) Copying: 367/1024 [MB] (17 MBps) Copying: 383/1024 [MB] (16 MBps) Copying: 400/1024 [MB] (16 MBps) Copying: 412/1024 [MB] (12 MBps) Copying: 429/1024 [MB] (16 MBps) Copying: 442/1024 [MB] (12 MBps) Copying: 460/1024 [MB] (18 MBps) Copying: 481/1024 [MB] (21 MBps) Copying: 494/1024 [MB] (13 MBps) Copying: 518/1024 [MB] (23 MBps) Copying: 536/1024 [MB] (18 MBps) Copying: 557/1024 [MB] (20 MBps) Copying: 578/1024 [MB] (20 MBps) Copying: 596/1024 [MB] (18 MBps) Copying: 612/1024 [MB] (15 MBps) Copying: 627/1024 [MB] (15 MBps) Copying: 639/1024 [MB] (12 MBps) Copying: 652/1024 [MB] (12 MBps) Copying: 671/1024 [MB] (18 MBps) Copying: 681/1024 [MB] (10 MBps) Copying: 692/1024 [MB] (10 MBps) Copying: 706/1024 [MB] (14 MBps) Copying: 726/1024 [MB] (20 MBps) Copying: 741/1024 [MB] (15 MBps) Copying: 759/1024 [MB] (17 MBps) Copying: 776/1024 [MB] (16 MBps) Copying: 789/1024 [MB] (12 MBps) Copying: 804/1024 [MB] (15 MBps) Copying: 828/1024 [MB] (23 MBps) Copying: 845/1024 [MB] (17 MBps) Copying: 863/1024 [MB] (17 MBps) Copying: 879/1024 [MB] (15 MBps) Copying: 890/1024 [MB] (10 MBps) Copying: 903/1024 [MB] (13 MBps) Copying: 917/1024 [MB] (14 MBps) Copying: 928/1024 [MB] (10 MBps) Copying: 939/1024 [MB] (10 MBps) Copying: 953/1024 [MB] (14 MBps) Copying: 978/1024 [MB] (25 MBps) Copying: 989/1024 [MB] (10 MBps) Copying: 999/1024 [MB] (10 MBps) Copying: 1015/1024 [MB] (15 MBps) Copying: 1024/1024 [MB] (average 15 MBps)[2024-09-28 10:46:52.220983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.559 [2024-09-28 10:46:52.221057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:17.559 [2024-09-28 10:46:52.221506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:26:17.559 [2024-09-28 10:46:52.221532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.559 [2024-09-28 10:46:52.221566] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:17.559 [2024-09-28 10:46:52.222422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.559 [2024-09-28 10:46:52.222462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:17.559 [2024-09-28 10:46:52.222496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.836 ms 00:26:17.559 [2024-09-28 10:46:52.222506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.560 [2024-09-28 10:46:52.222740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.560 [2024-09-28 10:46:52.222761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:17.560 [2024-09-28 10:46:52.222771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:26:17.560 [2024-09-28 10:46:52.222782] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:26:17.560 [2024-09-28 10:46:52.226261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.560 [2024-09-28 10:46:52.226288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:17.560 [2024-09-28 10:46:52.226299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.454 ms 00:26:17.560 [2024-09-28 10:46:52.226307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.560 [2024-09-28 10:46:52.232615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.560 [2024-09-28 10:46:52.232661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:17.560 [2024-09-28 10:46:52.232673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.280 ms 00:26:17.560 [2024-09-28 10:46:52.232682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.560 [2024-09-28 10:46:52.235932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.560 [2024-09-28 10:46:52.236166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:17.560 [2024-09-28 10:46:52.236188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.178 ms 00:26:17.560 [2024-09-28 10:46:52.236195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.560 [2024-09-28 10:46:52.241496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.560 [2024-09-28 10:46:52.241554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:17.560 [2024-09-28 10:46:52.241566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.164 ms 00:26:17.560 [2024-09-28 10:46:52.241574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.560 [2024-09-28 10:46:52.246779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.560 [2024-09-28 10:46:52.246848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:17.560 [2024-09-28 10:46:52.246861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.153 ms 00:26:17.560 [2024-09-28 10:46:52.246869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.560 [2024-09-28 10:46:52.250288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.560 [2024-09-28 10:46:52.250486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:17.560 [2024-09-28 10:46:52.250506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.389 ms 00:26:17.560 [2024-09-28 10:46:52.250515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.560 [2024-09-28 10:46:52.253370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.560 [2024-09-28 10:46:52.253413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:17.560 [2024-09-28 10:46:52.253423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.813 ms 00:26:17.560 [2024-09-28 10:46:52.253431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.560 [2024-09-28 10:46:52.256375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.560 [2024-09-28 10:46:52.256470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:17.560 [2024-09-28 10:46:52.256488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.430 ms 00:26:17.560 
[2024-09-28 10:46:52.256500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.560 [2024-09-28 10:46:52.258832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.560 [2024-09-28 10:46:52.258889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:17.560 [2024-09-28 10:46:52.258903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.237 ms 00:26:17.560 [2024-09-28 10:46:52.258914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.560 [2024-09-28 10:46:52.258991] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:17.560 [2024-09-28 10:46:52.259016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:17.560 [2024-09-28 10:46:52.259034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:26:17.560 [2024-09-28 10:46:52.259049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:17.560 [2024-09-28 10:46:52.259062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:17.560 [2024-09-28 10:46:52.259074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:17.560 [2024-09-28 10:46:52.259086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:17.560 [2024-09-28 10:46:52.259098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:17.560 [2024-09-28 10:46:52.259110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:17.560 [2024-09-28 10:46:52.259123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:17.560 [2024-09-28 10:46:52.259135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:17.560 [2024-09-28 10:46:52.259147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:17.560 [2024-09-28 10:46:52.259160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:17.560 [2024-09-28 10:46:52.259173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:17.560 [2024-09-28 10:46:52.259186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:17.560 [2024-09-28 10:46:52.259197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:17.560 [2024-09-28 10:46:52.259210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:17.560 [2024-09-28 10:46:52.259224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:17.560 [2024-09-28 10:46:52.259237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:17.560 [2024-09-28 10:46:52.259250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:17.560 [2024-09-28 10:46:52.259263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:17.560 [2024-09-28 
10:46:52.259276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:17.560 [2024-09-28 10:46:52.259288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:17.560 [2024-09-28 10:46:52.259300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:17.560 [2024-09-28 10:46:52.259313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:17.560 [2024-09-28 10:46:52.259325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:17.560 [2024-09-28 10:46:52.259337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:17.560 [2024-09-28 10:46:52.259348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:17.560 [2024-09-28 10:46:52.259359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:17.560 [2024-09-28 10:46:52.259372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:17.560 [2024-09-28 10:46:52.259385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:17.560 [2024-09-28 10:46:52.259397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:17.560 [2024-09-28 10:46:52.259409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:17.560 [2024-09-28 10:46:52.259421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:17.560 [2024-09-28 10:46:52.259435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:17.560 [2024-09-28 10:46:52.259455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:17.560 [2024-09-28 10:46:52.259470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:17.560 [2024-09-28 10:46:52.259489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:17.560 [2024-09-28 10:46:52.259501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:17.560 [2024-09-28 10:46:52.259513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:17.560 [2024-09-28 10:46:52.259526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:17.560 [2024-09-28 10:46:52.259538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:17.560 [2024-09-28 10:46:52.259550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:17.560 [2024-09-28 10:46:52.259568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:17.560 [2024-09-28 10:46:52.259581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:17.560 [2024-09-28 10:46:52.259593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 
00:26:17.560 [2024-09-28 10:46:52.259605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:17.560 [2024-09-28 10:46:52.259624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:17.560 [2024-09-28 10:46:52.259637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:17.560 [2024-09-28 10:46:52.259649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:17.560 [2024-09-28 10:46:52.259663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:17.560 [2024-09-28 10:46:52.259682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:17.560 [2024-09-28 10:46:52.259694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:17.560 [2024-09-28 10:46:52.259705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:17.560 [2024-09-28 10:46:52.259718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:17.560 [2024-09-28 10:46:52.259730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:17.560 [2024-09-28 10:46:52.259742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:17.560 [2024-09-28 10:46:52.259754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:17.561 [2024-09-28 10:46:52.259767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:17.561 [2024-09-28 10:46:52.259779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:17.561 [2024-09-28 10:46:52.259790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:17.561 [2024-09-28 10:46:52.259804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:17.561 [2024-09-28 10:46:52.259815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:17.561 [2024-09-28 10:46:52.259827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:17.561 [2024-09-28 10:46:52.259838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:17.561 [2024-09-28 10:46:52.259852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:17.561 [2024-09-28 10:46:52.259863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:17.561 [2024-09-28 10:46:52.259875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:17.561 [2024-09-28 10:46:52.259889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:17.561 [2024-09-28 10:46:52.259903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:17.561 [2024-09-28 10:46:52.259915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 
wr_cnt: 0 state: free 00:26:17.561 [2024-09-28 10:46:52.259927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:17.561 [2024-09-28 10:46:52.259939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:17.561 [2024-09-28 10:46:52.259953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:17.561 [2024-09-28 10:46:52.259979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:17.561 [2024-09-28 10:46:52.259991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:17.561 [2024-09-28 10:46:52.260003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:17.561 [2024-09-28 10:46:52.260016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:17.561 [2024-09-28 10:46:52.260029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:17.561 [2024-09-28 10:46:52.260041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:17.561 [2024-09-28 10:46:52.260057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:17.561 [2024-09-28 10:46:52.260070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:17.561 [2024-09-28 10:46:52.260082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:17.561 [2024-09-28 10:46:52.260093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:17.561 [2024-09-28 10:46:52.260106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:17.561 [2024-09-28 10:46:52.260119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:17.561 [2024-09-28 10:46:52.260130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:17.561 [2024-09-28 10:46:52.260142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:17.561 [2024-09-28 10:46:52.260153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:17.561 [2024-09-28 10:46:52.260167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:17.561 [2024-09-28 10:46:52.260178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:17.561 [2024-09-28 10:46:52.260190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:17.561 [2024-09-28 10:46:52.260201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:17.561 [2024-09-28 10:46:52.260214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:17.561 [2024-09-28 10:46:52.260225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:17.561 [2024-09-28 10:46:52.260236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:17.561 [2024-09-28 10:46:52.260247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:17.561 [2024-09-28 10:46:52.260260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:17.561 [2024-09-28 10:46:52.260272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:17.561 [2024-09-28 10:46:52.260284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:17.561 [2024-09-28 10:46:52.260304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:17.561 [2024-09-28 10:46:52.260330] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:17.561 [2024-09-28 10:46:52.260355] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3e3420ac-c58b-4945-9d91-c714566e6f15 00:26:17.561 [2024-09-28 10:46:52.260368] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:26:17.561 [2024-09-28 10:46:52.260380] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:17.561 [2024-09-28 10:46:52.260393] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:17.561 [2024-09-28 10:46:52.260405] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:17.561 [2024-09-28 10:46:52.260417] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:17.561 [2024-09-28 10:46:52.260429] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:17.561 [2024-09-28 10:46:52.260447] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:17.561 [2024-09-28 10:46:52.260457] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:17.561 [2024-09-28 10:46:52.260468] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:17.561 [2024-09-28 10:46:52.260479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.561 [2024-09-28 10:46:52.260498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:17.561 [2024-09-28 10:46:52.260511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.490 ms 00:26:17.561 [2024-09-28 10:46:52.260522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.561 [2024-09-28 10:46:52.263114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.561 [2024-09-28 10:46:52.263153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:17.561 [2024-09-28 10:46:52.263171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.568 ms 00:26:17.561 [2024-09-28 10:46:52.263184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.561 [2024-09-28 10:46:52.263329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.561 [2024-09-28 10:46:52.263352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:17.561 [2024-09-28 10:46:52.263366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:26:17.561 [2024-09-28 10:46:52.263379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.561 [2024-09-28 10:46:52.270501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.561 [2024-09-28 10:46:52.270545] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:17.561 [2024-09-28 10:46:52.270556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.561 [2024-09-28 10:46:52.270564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.561 [2024-09-28 10:46:52.270628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.561 [2024-09-28 10:46:52.270642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:17.561 [2024-09-28 10:46:52.270651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.561 [2024-09-28 10:46:52.270658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.561 [2024-09-28 10:46:52.270745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.561 [2024-09-28 10:46:52.270761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:17.561 [2024-09-28 10:46:52.270770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.561 [2024-09-28 10:46:52.270778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.561 [2024-09-28 10:46:52.270797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.561 [2024-09-28 10:46:52.270805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:17.561 [2024-09-28 10:46:52.270813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.561 [2024-09-28 10:46:52.270824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.561 [2024-09-28 10:46:52.284297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.561 [2024-09-28 10:46:52.284471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:17.561 [2024-09-28 10:46:52.284490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.561 [2024-09-28 10:46:52.284509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.561 [2024-09-28 10:46:52.294744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.561 [2024-09-28 10:46:52.294801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:17.561 [2024-09-28 10:46:52.294813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.561 [2024-09-28 10:46:52.294821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.561 [2024-09-28 10:46:52.294877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.561 [2024-09-28 10:46:52.294887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:17.561 [2024-09-28 10:46:52.294901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.561 [2024-09-28 10:46:52.294913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.561 [2024-09-28 10:46:52.294948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.561 [2024-09-28 10:46:52.294991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:17.561 [2024-09-28 10:46:52.295000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.561 [2024-09-28 10:46:52.295008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.561 [2024-09-28 10:46:52.295084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:26:17.561 [2024-09-28 10:46:52.295095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:17.561 [2024-09-28 10:46:52.295104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.561 [2024-09-28 10:46:52.295112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.562 [2024-09-28 10:46:52.295170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.562 [2024-09-28 10:46:52.295182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:17.562 [2024-09-28 10:46:52.295194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.562 [2024-09-28 10:46:52.295202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.562 [2024-09-28 10:46:52.295243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.562 [2024-09-28 10:46:52.295254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:17.562 [2024-09-28 10:46:52.295262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.562 [2024-09-28 10:46:52.295270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.562 [2024-09-28 10:46:52.295318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.562 [2024-09-28 10:46:52.295334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:17.562 [2024-09-28 10:46:52.295343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.562 [2024-09-28 10:46:52.295351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.562 [2024-09-28 10:46:52.295484] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 74.484 ms, result 0 00:26:17.822 00:26:17.822 00:26:17.822 10:46:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:20.361 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:26:20.361 10:46:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:26:20.361 10:46:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:26:20.361 10:46:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:20.361 10:46:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:20.361 10:46:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:26:20.361 10:46:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:20.361 10:46:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:20.361 10:46:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 90135 00:26:20.361 10:46:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@950 -- # '[' -z 90135 ']' 00:26:20.361 Process with pid 90135 is not found 00:26:20.361 10:46:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # kill -0 90135 00:26:20.361 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (90135) - No such process 00:26:20.361 10:46:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@977 -- # echo 'Process 
with pid 90135 is not found' 00:26:20.361 10:46:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:26:20.622 Remove shared memory files 00:26:20.622 10:46:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:26:20.622 10:46:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:20.622 10:46:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:26:20.622 10:46:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:26:20.622 10:46:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:26:20.622 10:46:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:20.622 10:46:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:26:20.622 ************************************ 00:26:20.622 END TEST ftl_dirty_shutdown 00:26:20.622 ************************************ 00:26:20.622 00:26:20.622 real 4m37.215s 00:26:20.622 user 5m16.517s 00:26:20.622 sys 0m30.069s 00:26:20.622 10:46:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:20.622 10:46:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:20.622 10:46:55 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:26:20.622 10:46:55 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:26:20.622 10:46:55 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:20.622 10:46:55 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:20.622 ************************************ 00:26:20.622 START TEST ftl_upgrade_shutdown 00:26:20.622 ************************************ 00:26:20.622 10:46:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:26:20.622 * Looking for test storage... 
00:26:20.622 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:20.622 10:46:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:20.622 10:46:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:26:20.622 10:46:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:20.883 10:46:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:20.883 10:46:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:20.883 10:46:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:20.883 10:46:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:20.883 10:46:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:20.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:20.884 --rc genhtml_branch_coverage=1 00:26:20.884 --rc genhtml_function_coverage=1 00:26:20.884 --rc genhtml_legend=1 00:26:20.884 --rc geninfo_all_blocks=1 00:26:20.884 --rc geninfo_unexecuted_blocks=1 00:26:20.884 00:26:20.884 ' 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:20.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:20.884 --rc genhtml_branch_coverage=1 00:26:20.884 --rc genhtml_function_coverage=1 00:26:20.884 --rc genhtml_legend=1 00:26:20.884 --rc geninfo_all_blocks=1 00:26:20.884 --rc geninfo_unexecuted_blocks=1 00:26:20.884 00:26:20.884 ' 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:20.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:20.884 --rc genhtml_branch_coverage=1 00:26:20.884 --rc genhtml_function_coverage=1 00:26:20.884 --rc genhtml_legend=1 00:26:20.884 --rc geninfo_all_blocks=1 00:26:20.884 --rc geninfo_unexecuted_blocks=1 00:26:20.884 00:26:20.884 ' 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:20.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:20.884 --rc genhtml_branch_coverage=1 00:26:20.884 --rc genhtml_function_coverage=1 00:26:20.884 --rc genhtml_legend=1 00:26:20.884 --rc geninfo_all_blocks=1 00:26:20.884 --rc geninfo_unexecuted_blocks=1 00:26:20.884 00:26:20.884 ' 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:26:20.884 10:46:55 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=93119 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 93119 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 93119 ']' 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:20.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:20.884 10:46:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:20.884 [2024-09-28 10:46:55.541298] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:26:20.884 [2024-09-28 10:46:55.541587] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93119 ] 00:26:21.147 [2024-09-28 10:46:55.672252] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
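(Annotation) The trace above is ftl/common.sh's tcp_target_setup bringing up the SPDK target that will own the FTL device: after the FTL_BASE/FTL_CACHE/FTL_L2P_DRAM_LIMIT knobs are exported, spdk_tgt is pinned to core 0, its pid is recorded, and the script blocks until the default RPC socket is listening. A condensed sketch of that sequence, using the helper names visible in the trace (exact argument handling in common.sh is omitted):

    # Sketch of the target bring-up traced above (ftl/common.sh tcp_target_setup).
    # waitforlisten comes from autotest_common.sh and polls /var/tmp/spdk.sock;
    # capturing the pid via $! is an assumption about the exact wording in common.sh.
    spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    "$spdk_tgt_bin" --cpumask='[0]' &     # FTL target reactor pinned to core 0
    spdk_tgt_pid=$!                       # trace shows spdk_tgt_pid=93119 in this run
    waitforlisten "$spdk_tgt_pid"         # wait for the RPC server before any rpc.py call
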
00:26:21.147 [2024-09-28 10:46:55.689738] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:21.147 [2024-09-28 10:46:55.741143] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:21.722 10:46:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:21.722 10:46:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:26:21.722 10:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:21.722 10:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:26:21.722 10:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:26:21.722 10:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:21.722 10:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:26:21.722 10:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:21.722 10:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:26:21.722 10:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:21.722 10:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:26:21.722 10:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:21.722 10:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:26:21.722 10:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:21.722 10:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:26:21.722 10:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:21.722 10:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:26:21.722 10:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:26:21.722 10:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:26:21.722 10:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:26:21.722 10:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:26:21.722 10:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:26:21.723 10:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:26:21.983 10:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:26:21.983 10:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:26:21.983 10:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:26:21.983 10:46:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:26:21.983 10:46:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:26:21.983 10:46:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:26:21.983 10:46:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:26:21.983 10:46:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:26:22.245 10:46:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:26:22.245 { 00:26:22.245 "name": 
"basen1", 00:26:22.245 "aliases": [ 00:26:22.245 "11ae70fd-6b7f-43f5-9f5e-b7b36429ea1a" 00:26:22.245 ], 00:26:22.245 "product_name": "NVMe disk", 00:26:22.245 "block_size": 4096, 00:26:22.245 "num_blocks": 1310720, 00:26:22.245 "uuid": "11ae70fd-6b7f-43f5-9f5e-b7b36429ea1a", 00:26:22.245 "numa_id": -1, 00:26:22.245 "assigned_rate_limits": { 00:26:22.245 "rw_ios_per_sec": 0, 00:26:22.245 "rw_mbytes_per_sec": 0, 00:26:22.245 "r_mbytes_per_sec": 0, 00:26:22.245 "w_mbytes_per_sec": 0 00:26:22.245 }, 00:26:22.245 "claimed": true, 00:26:22.245 "claim_type": "read_many_write_one", 00:26:22.245 "zoned": false, 00:26:22.245 "supported_io_types": { 00:26:22.245 "read": true, 00:26:22.245 "write": true, 00:26:22.245 "unmap": true, 00:26:22.245 "flush": true, 00:26:22.245 "reset": true, 00:26:22.245 "nvme_admin": true, 00:26:22.245 "nvme_io": true, 00:26:22.245 "nvme_io_md": false, 00:26:22.245 "write_zeroes": true, 00:26:22.245 "zcopy": false, 00:26:22.245 "get_zone_info": false, 00:26:22.245 "zone_management": false, 00:26:22.245 "zone_append": false, 00:26:22.245 "compare": true, 00:26:22.245 "compare_and_write": false, 00:26:22.245 "abort": true, 00:26:22.245 "seek_hole": false, 00:26:22.245 "seek_data": false, 00:26:22.245 "copy": true, 00:26:22.245 "nvme_iov_md": false 00:26:22.245 }, 00:26:22.245 "driver_specific": { 00:26:22.245 "nvme": [ 00:26:22.245 { 00:26:22.245 "pci_address": "0000:00:11.0", 00:26:22.245 "trid": { 00:26:22.245 "trtype": "PCIe", 00:26:22.245 "traddr": "0000:00:11.0" 00:26:22.245 }, 00:26:22.245 "ctrlr_data": { 00:26:22.245 "cntlid": 0, 00:26:22.245 "vendor_id": "0x1b36", 00:26:22.245 "model_number": "QEMU NVMe Ctrl", 00:26:22.245 "serial_number": "12341", 00:26:22.245 "firmware_revision": "8.0.0", 00:26:22.245 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:22.245 "oacs": { 00:26:22.245 "security": 0, 00:26:22.245 "format": 1, 00:26:22.245 "firmware": 0, 00:26:22.245 "ns_manage": 1 00:26:22.245 }, 00:26:22.245 "multi_ctrlr": false, 00:26:22.245 "ana_reporting": false 00:26:22.245 }, 00:26:22.245 "vs": { 00:26:22.245 "nvme_version": "1.4" 00:26:22.245 }, 00:26:22.245 "ns_data": { 00:26:22.245 "id": 1, 00:26:22.245 "can_share": false 00:26:22.245 } 00:26:22.245 } 00:26:22.245 ], 00:26:22.245 "mp_policy": "active_passive" 00:26:22.245 } 00:26:22.245 } 00:26:22.245 ]' 00:26:22.245 10:46:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:26:22.245 10:46:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:26:22.245 10:46:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:26:22.245 10:46:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:26:22.245 10:46:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:26:22.245 10:46:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:26:22.245 10:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:26:22.245 10:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:26:22.245 10:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:26:22.245 10:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:22.245 10:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:22.506 10:46:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=70a81290-b884-459c-a83b-aeb626675223 00:26:22.506 10:46:57 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:26:22.506 10:46:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 70a81290-b884-459c-a83b-aeb626675223 00:26:22.766 10:46:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:26:23.028 10:46:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=0f0f172f-9f11-401b-8404-80c3aaa94c5f 00:26:23.028 10:46:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 0f0f172f-9f11-401b-8404-80c3aaa94c5f 00:26:23.289 10:46:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=a36ead91-a48b-43cf-87ea-947f6ab6db9f 00:26:23.289 10:46:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z a36ead91-a48b-43cf-87ea-947f6ab6db9f ]] 00:26:23.289 10:46:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 a36ead91-a48b-43cf-87ea-947f6ab6db9f 5120 00:26:23.289 10:46:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:26:23.289 10:46:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:26:23.289 10:46:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=a36ead91-a48b-43cf-87ea-947f6ab6db9f 00:26:23.289 10:46:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:26:23.289 10:46:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size a36ead91-a48b-43cf-87ea-947f6ab6db9f 00:26:23.289 10:46:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=a36ead91-a48b-43cf-87ea-947f6ab6db9f 00:26:23.289 10:46:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:26:23.289 10:46:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:26:23.289 10:46:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:26:23.289 10:46:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a36ead91-a48b-43cf-87ea-947f6ab6db9f 00:26:23.549 10:46:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:26:23.549 { 00:26:23.549 "name": "a36ead91-a48b-43cf-87ea-947f6ab6db9f", 00:26:23.549 "aliases": [ 00:26:23.549 "lvs/basen1p0" 00:26:23.549 ], 00:26:23.549 "product_name": "Logical Volume", 00:26:23.549 "block_size": 4096, 00:26:23.549 "num_blocks": 5242880, 00:26:23.549 "uuid": "a36ead91-a48b-43cf-87ea-947f6ab6db9f", 00:26:23.549 "assigned_rate_limits": { 00:26:23.549 "rw_ios_per_sec": 0, 00:26:23.549 "rw_mbytes_per_sec": 0, 00:26:23.549 "r_mbytes_per_sec": 0, 00:26:23.549 "w_mbytes_per_sec": 0 00:26:23.549 }, 00:26:23.549 "claimed": false, 00:26:23.549 "zoned": false, 00:26:23.549 "supported_io_types": { 00:26:23.549 "read": true, 00:26:23.549 "write": true, 00:26:23.549 "unmap": true, 00:26:23.549 "flush": false, 00:26:23.549 "reset": true, 00:26:23.549 "nvme_admin": false, 00:26:23.549 "nvme_io": false, 00:26:23.549 "nvme_io_md": false, 00:26:23.549 "write_zeroes": true, 00:26:23.549 "zcopy": false, 00:26:23.549 "get_zone_info": false, 00:26:23.549 "zone_management": false, 00:26:23.549 "zone_append": false, 00:26:23.549 "compare": false, 00:26:23.549 "compare_and_write": false, 00:26:23.549 "abort": false, 00:26:23.549 "seek_hole": true, 00:26:23.549 "seek_data": true, 
00:26:23.549 "copy": false, 00:26:23.549 "nvme_iov_md": false 00:26:23.549 }, 00:26:23.549 "driver_specific": { 00:26:23.549 "lvol": { 00:26:23.549 "lvol_store_uuid": "0f0f172f-9f11-401b-8404-80c3aaa94c5f", 00:26:23.549 "base_bdev": "basen1", 00:26:23.549 "thin_provision": true, 00:26:23.549 "num_allocated_clusters": 0, 00:26:23.549 "snapshot": false, 00:26:23.549 "clone": false, 00:26:23.549 "esnap_clone": false 00:26:23.549 } 00:26:23.549 } 00:26:23.549 } 00:26:23.549 ]' 00:26:23.549 10:46:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:26:23.549 10:46:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:26:23.549 10:46:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:26:23.549 10:46:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:26:23.549 10:46:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:26:23.549 10:46:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:26:23.549 10:46:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:26:23.549 10:46:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:26:23.549 10:46:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:26:23.809 10:46:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:26:23.809 10:46:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:26:23.809 10:46:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:26:24.071 10:46:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:26:24.071 10:46:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:26:24.071 10:46:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d a36ead91-a48b-43cf-87ea-947f6ab6db9f -c cachen1p0 --l2p_dram_limit 2 00:26:24.071 [2024-09-28 10:46:58.823357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.071 [2024-09-28 10:46:58.823614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:24.071 [2024-09-28 10:46:58.823646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:26:24.071 [2024-09-28 10:46:58.823656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.071 [2024-09-28 10:46:58.823736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.071 [2024-09-28 10:46:58.823747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:24.071 [2024-09-28 10:46:58.823762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:26:24.071 [2024-09-28 10:46:58.823774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.071 [2024-09-28 10:46:58.823808] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:24.071 [2024-09-28 10:46:58.824124] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:24.071 [2024-09-28 10:46:58.824147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.071 [2024-09-28 10:46:58.824159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 
name: Open cache bdev 00:26:24.071 [2024-09-28 10:46:58.824173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.348 ms 00:26:24.071 [2024-09-28 10:46:58.824183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.071 [2024-09-28 10:46:58.824260] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 129af541-3381-49a7-a48a-35d23a66a43f 00:26:24.071 [2024-09-28 10:46:58.826048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.071 [2024-09-28 10:46:58.826098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:26:24.071 [2024-09-28 10:46:58.826110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:26:24.071 [2024-09-28 10:46:58.826126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.071 [2024-09-28 10:46:58.835172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.071 [2024-09-28 10:46:58.835224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:24.071 [2024-09-28 10:46:58.835243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.996 ms 00:26:24.071 [2024-09-28 10:46:58.835256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.071 [2024-09-28 10:46:58.835311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.071 [2024-09-28 10:46:58.835324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:24.071 [2024-09-28 10:46:58.835336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:26:24.071 [2024-09-28 10:46:58.835350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.071 [2024-09-28 10:46:58.835413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.071 [2024-09-28 10:46:58.835428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:24.071 [2024-09-28 10:46:58.835437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:26:24.071 [2024-09-28 10:46:58.835446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.071 [2024-09-28 10:46:58.835471] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:24.071 [2024-09-28 10:46:58.837825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.071 [2024-09-28 10:46:58.838040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:24.071 [2024-09-28 10:46:58.838065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.357 ms 00:26:24.071 [2024-09-28 10:46:58.838080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.071 [2024-09-28 10:46:58.838120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.071 [2024-09-28 10:46:58.838130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:24.071 [2024-09-28 10:46:58.838145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:26:24.071 [2024-09-28 10:46:58.838154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.071 [2024-09-28 10:46:58.838175] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:26:24.071 [2024-09-28 10:46:58.838324] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:26:24.071 [2024-09-28 
10:46:58.838340] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:24.071 [2024-09-28 10:46:58.838351] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:26:24.071 [2024-09-28 10:46:58.838364] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:24.071 [2024-09-28 10:46:58.838377] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:24.071 [2024-09-28 10:46:58.838387] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:24.071 [2024-09-28 10:46:58.838396] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:24.071 [2024-09-28 10:46:58.838409] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:26:24.071 [2024-09-28 10:46:58.838419] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:26:24.071 [2024-09-28 10:46:58.838437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.071 [2024-09-28 10:46:58.838445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:24.071 [2024-09-28 10:46:58.838456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.264 ms 00:26:24.071 [2024-09-28 10:46:58.838463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.071 [2024-09-28 10:46:58.838568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.071 [2024-09-28 10:46:58.838576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:24.071 [2024-09-28 10:46:58.838586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.068 ms 00:26:24.071 [2024-09-28 10:46:58.838593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.071 [2024-09-28 10:46:58.838720] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:24.072 [2024-09-28 10:46:58.838731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:24.072 [2024-09-28 10:46:58.838741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:24.072 [2024-09-28 10:46:58.838749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:24.072 [2024-09-28 10:46:58.838761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:24.072 [2024-09-28 10:46:58.838769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:24.072 [2024-09-28 10:46:58.838778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:24.072 [2024-09-28 10:46:58.838785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:24.072 [2024-09-28 10:46:58.838795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:24.072 [2024-09-28 10:46:58.838803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:24.072 [2024-09-28 10:46:58.838811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:24.072 [2024-09-28 10:46:58.838818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:24.072 [2024-09-28 10:46:58.838830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:24.072 [2024-09-28 10:46:58.838840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:24.072 [2024-09-28 10:46:58.838851] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:26:24.072 [2024-09-28 10:46:58.838857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:24.072 [2024-09-28 10:46:58.838866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:24.072 [2024-09-28 10:46:58.838873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:26:24.072 [2024-09-28 10:46:58.838882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:24.072 [2024-09-28 10:46:58.838889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:24.072 [2024-09-28 10:46:58.838898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:24.072 [2024-09-28 10:46:58.838904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:24.072 [2024-09-28 10:46:58.838913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:24.072 [2024-09-28 10:46:58.838921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:24.072 [2024-09-28 10:46:58.838930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:24.072 [2024-09-28 10:46:58.838937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:24.072 [2024-09-28 10:46:58.838946] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:24.072 [2024-09-28 10:46:58.838952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:24.072 [2024-09-28 10:46:58.838979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:24.072 [2024-09-28 10:46:58.838987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:26:24.072 [2024-09-28 10:46:58.838995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:24.072 [2024-09-28 10:46:58.839002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:24.072 [2024-09-28 10:46:58.839011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:26:24.072 [2024-09-28 10:46:58.839018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:24.072 [2024-09-28 10:46:58.839027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:24.072 [2024-09-28 10:46:58.839034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:26:24.072 [2024-09-28 10:46:58.839043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:24.072 [2024-09-28 10:46:58.839049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:26:24.072 [2024-09-28 10:46:58.839058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:26:24.072 [2024-09-28 10:46:58.839064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:24.072 [2024-09-28 10:46:58.839073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:26:24.072 [2024-09-28 10:46:58.839080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:26:24.072 [2024-09-28 10:46:58.839088] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:24.072 [2024-09-28 10:46:58.839095] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:26:24.072 [2024-09-28 10:46:58.839107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:24.072 [2024-09-28 10:46:58.839116] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:24.072 [2024-09-28 10:46:58.839126] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:24.072 [2024-09-28 10:46:58.839134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:24.072 [2024-09-28 10:46:58.839144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:24.072 [2024-09-28 10:46:58.839151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:24.072 [2024-09-28 10:46:58.839160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:24.072 [2024-09-28 10:46:58.839166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:24.072 [2024-09-28 10:46:58.839175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:24.072 [2024-09-28 10:46:58.839187] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:24.072 [2024-09-28 10:46:58.839199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:24.072 [2024-09-28 10:46:58.839216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:24.072 [2024-09-28 10:46:58.839226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:26:24.072 [2024-09-28 10:46:58.839233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:26:24.072 [2024-09-28 10:46:58.839244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:26:24.072 [2024-09-28 10:46:58.839251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:26:24.072 [2024-09-28 10:46:58.839263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:26:24.072 [2024-09-28 10:46:58.839270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:26:24.072 [2024-09-28 10:46:58.839279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:26:24.072 [2024-09-28 10:46:58.839286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:26:24.072 [2024-09-28 10:46:58.839295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:26:24.072 [2024-09-28 10:46:58.839302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:26:24.072 [2024-09-28 10:46:58.839312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:26:24.072 [2024-09-28 10:46:58.839318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:26:24.072 [2024-09-28 10:46:58.839328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:26:24.072 [2024-09-28 10:46:58.839335] upgrade/ftl_sb_v5.c: 
422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:24.072 [2024-09-28 10:46:58.839349] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:24.072 [2024-09-28 10:46:58.839358] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:24.072 [2024-09-28 10:46:58.839367] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:24.072 [2024-09-28 10:46:58.839374] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:24.072 [2024-09-28 10:46:58.839383] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:24.072 [2024-09-28 10:46:58.839392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.072 [2024-09-28 10:46:58.839404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:24.072 [2024-09-28 10:46:58.839413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.741 ms 00:26:24.072 [2024-09-28 10:46:58.839423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.072 [2024-09-28 10:46:58.839491] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:26:24.072 [2024-09-28 10:46:58.839506] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:26:28.338 [2024-09-28 10:47:02.608325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:28.338 [2024-09-28 10:47:02.608423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:26:28.338 [2024-09-28 10:47:02.608445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3768.820 ms 00:26:28.338 [2024-09-28 10:47:02.608457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.338 [2024-09-28 10:47:02.622011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:28.338 [2024-09-28 10:47:02.622063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:28.338 [2024-09-28 10:47:02.622077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.423 ms 00:26:28.338 [2024-09-28 10:47:02.622091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.338 [2024-09-28 10:47:02.622155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:28.338 [2024-09-28 10:47:02.622171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:28.338 [2024-09-28 10:47:02.622180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:26:28.338 [2024-09-28 10:47:02.622191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.338 [2024-09-28 10:47:02.633622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:28.338 [2024-09-28 10:47:02.633678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:28.338 [2024-09-28 10:47:02.633690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.384 ms 00:26:28.338 [2024-09-28 10:47:02.633701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.338 [2024-09-28 10:47:02.633739] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:28.338 [2024-09-28 10:47:02.633750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:28.338 [2024-09-28 10:47:02.633765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:26:28.338 [2024-09-28 10:47:02.633779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.338 [2024-09-28 10:47:02.634367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:28.338 [2024-09-28 10:47:02.634395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:28.338 [2024-09-28 10:47:02.634406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.536 ms 00:26:28.338 [2024-09-28 10:47:02.634421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.338 [2024-09-28 10:47:02.634485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:28.338 [2024-09-28 10:47:02.634499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:28.338 [2024-09-28 10:47:02.634512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:26:28.338 [2024-09-28 10:47:02.634522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.338 [2024-09-28 10:47:02.652608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:28.338 [2024-09-28 10:47:02.652850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:28.338 [2024-09-28 10:47:02.652876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.058 ms 00:26:28.338 [2024-09-28 10:47:02.652890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.338 [2024-09-28 10:47:02.664249] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:28.338 [2024-09-28 10:47:02.665496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:28.338 [2024-09-28 10:47:02.665539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:28.338 [2024-09-28 10:47:02.665553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.430 ms 00:26:28.338 [2024-09-28 10:47:02.665562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.338 [2024-09-28 10:47:02.685915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:28.338 [2024-09-28 10:47:02.685993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:26:28.338 [2024-09-28 10:47:02.686013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.316 ms 00:26:28.338 [2024-09-28 10:47:02.686025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.338 [2024-09-28 10:47:02.686130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:28.338 [2024-09-28 10:47:02.686142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:28.338 [2024-09-28 10:47:02.686156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:26:28.338 [2024-09-28 10:47:02.686164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.338 [2024-09-28 10:47:02.691286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:28.338 [2024-09-28 10:47:02.691469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:26:28.338 [2024-09-28 10:47:02.691495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 
duration: 5.065 ms 00:26:28.338 [2024-09-28 10:47:02.691504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.338 [2024-09-28 10:47:02.696780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:28.338 [2024-09-28 10:47:02.696830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:26:28.338 [2024-09-28 10:47:02.696844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.153 ms 00:26:28.338 [2024-09-28 10:47:02.696851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.338 [2024-09-28 10:47:02.697231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:28.338 [2024-09-28 10:47:02.697243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:28.338 [2024-09-28 10:47:02.697258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.331 ms 00:26:28.338 [2024-09-28 10:47:02.697266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.338 [2024-09-28 10:47:02.742926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:28.338 [2024-09-28 10:47:02.743144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:26:28.338 [2024-09-28 10:47:02.743174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 45.613 ms 00:26:28.338 [2024-09-28 10:47:02.743187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.338 [2024-09-28 10:47:02.750481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:28.338 [2024-09-28 10:47:02.750658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:26:28.338 [2024-09-28 10:47:02.750683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.145 ms 00:26:28.338 [2024-09-28 10:47:02.750691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.338 [2024-09-28 10:47:02.757136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:28.338 [2024-09-28 10:47:02.757303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:26:28.338 [2024-09-28 10:47:02.757326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.397 ms 00:26:28.338 [2024-09-28 10:47:02.757334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.338 [2024-09-28 10:47:02.763690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:28.338 [2024-09-28 10:47:02.763741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:26:28.338 [2024-09-28 10:47:02.763758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.307 ms 00:26:28.338 [2024-09-28 10:47:02.763765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.338 [2024-09-28 10:47:02.763822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:28.338 [2024-09-28 10:47:02.763832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:28.338 [2024-09-28 10:47:02.763843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:28.338 [2024-09-28 10:47:02.763851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.338 [2024-09-28 10:47:02.763948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:28.338 [2024-09-28 10:47:02.764014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:28.338 [2024-09-28 10:47:02.764027] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:26:28.338 [2024-09-28 10:47:02.764035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.338 [2024-09-28 10:47:02.765195] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3941.344 ms, result 0 00:26:28.338 { 00:26:28.338 "name": "ftl", 00:26:28.338 "uuid": "129af541-3381-49a7-a48a-35d23a66a43f" 00:26:28.338 } 00:26:28.338 10:47:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:26:28.338 [2024-09-28 10:47:02.988303] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:28.338 10:47:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:26:28.599 10:47:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:26:28.860 [2024-09-28 10:47:03.408803] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:26:28.860 10:47:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:26:28.860 [2024-09-28 10:47:03.633200] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:29.122 10:47:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:26:29.383 Fill FTL, iteration 1 00:26:29.383 10:47:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:26:29.383 10:47:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:26:29.383 10:47:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:26:29.383 10:47:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:26:29.383 10:47:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:26:29.383 10:47:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:26:29.383 10:47:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:26:29.383 10:47:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:26:29.383 10:47:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:26:29.383 10:47:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:29.383 10:47:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:26:29.383 10:47:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:26:29.383 10:47:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:29.383 10:47:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:29.383 10:47:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:29.383 10:47:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:26:29.383 10:47:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' 
--rpc-socket=/var/tmp/spdk.tgt.sock 00:26:29.383 10:47:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=93246 00:26:29.383 10:47:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:26:29.383 10:47:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 93246 /var/tmp/spdk.tgt.sock 00:26:29.383 10:47:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 93246 ']' 00:26:29.383 10:47:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:26:29.383 10:47:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:29.383 10:47:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:26:29.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:26:29.383 10:47:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:29.383 10:47:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:29.383 [2024-09-28 10:47:04.078450] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:26:29.383 [2024-09-28 10:47:04.078797] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93246 ] 00:26:29.645 [2024-09-28 10:47:04.212650] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:29.645 [2024-09-28 10:47:04.231052] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.645 [2024-09-28 10:47:04.303810] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:30.216 10:47:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:30.216 10:47:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:26:30.216 10:47:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:26:30.477 ftln1 00:26:30.477 10:47:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:26:30.477 10:47:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:26:30.739 10:47:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:26:30.739 10:47:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 93246 00:26:30.739 10:47:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 93246 ']' 00:26:30.739 10:47:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 93246 00:26:30.739 10:47:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:26:30.739 10:47:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:30.739 10:47:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93246 00:26:30.739 killing process with pid 93246 00:26:30.739 10:47:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_1 
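(Annotation) From here the test drives the FTL bdev over NVMe/TCP from a separate initiator process. The trace above shows tcp_initiator_setup doing that bootstrap once: it starts a second spdk_tgt on core 1 with its own RPC socket, attaches the exported subsystem as bdev "ftl" (the namespace appears as ftln1), dumps the bdev subsystem configuration into ini.json, and then kills the helper so spdk_dd can later replay that JSON on its own. Roughly, as a sketch of the traced commands rather than the verbatim common.sh source:

    # Sketch of tcp_initiator_setup as traced above: build ini.json once so spdk_dd
    # can recreate the TCP-attached FTL bdev from --json without a live RPC helper.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    ini_rpc=/var/tmp/spdk.tgt.sock
    ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json

    "$spdk_tgt_bin" --cpumask='[1]' --rpc-socket="$ini_rpc" &
    spdk_ini_pid=$!                                 # 93246 in this run
    waitforlisten "$spdk_ini_pid" "$ini_rpc"

    $rpc_py -s "$ini_rpc" bdev_nvme_attach_controller -b ftl -t tcp \
        -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0   # exposes ftln1

    {                                               # assumed redirection target; the
        echo '{"subsystems": ['                     # trace only shows the three commands
        $rpc_py -s "$ini_rpc" save_subsystem_config -n bdev
        echo ']}'
    } > "$ini_cnfg"

    killprocess "$spdk_ini_pid"    # the JSON config is all spdk_dd needs from now on
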
00:26:30.739 10:47:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:30.739 10:47:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93246' 00:26:30.739 10:47:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 93246 00:26:30.739 10:47:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 93246 00:26:31.316 10:47:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:26:31.316 10:47:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:26:31.316 [2024-09-28 10:47:06.030392] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:26:31.316 [2024-09-28 10:47:06.030507] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93278 ] 00:26:31.574 [2024-09-28 10:47:06.158427] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:31.574 [2024-09-28 10:47:06.179693] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:31.574 [2024-09-28 10:47:06.220619] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:36.618  Copying: 185/1024 [MB] (185 MBps) Copying: 416/1024 [MB] (231 MBps) Copying: 656/1024 [MB] (240 MBps) Copying: 891/1024 [MB] (235 MBps) Copying: 1024/1024 [MB] (average 225 MBps) 00:26:36.618 00:26:36.618 10:47:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:26:36.618 10:47:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:26:36.618 Calculate MD5 checksum, iteration 1 00:26:36.618 10:47:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:36.618 10:47:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:36.618 10:47:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:36.618 10:47:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:36.618 10:47:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:36.618 10:47:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:36.618 [2024-09-28 10:47:11.226704] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
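(Annotation) The "Copying:" progress above is the first fill pass: tcp_dd is a thin wrapper that runs spdk_dd pinned to core 1 against the ini.json config, so 1024 blocks of 1 MiB from /dev/urandom go to ftln1 over NVMe/TCP at queue depth 2. The checksum pass that follows reads the same 1 GiB window back into test/ftl/file. The two underlying invocations, as traced above (spdk_dd_bin and rootdir are set earlier by ftl/common.sh):

    # Fill: 1024 x 1 MiB of random data into the FTL bdev, queue depth 2, offset 0
    "$spdk_dd_bin" --cpumask='[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json="$rootdir/test/ftl/config/ini.json" \
        --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0

    # Read-back: the same 1 GiB window into a scratch file for checksumming
    "$spdk_dd_bin" --cpumask='[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json="$rootdir/test/ftl/config/ini.json" \
        --ib=ftln1 --of="$rootdir/test/ftl/file" --bs=1048576 --count=1024 --qd=2 --skip=0
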
00:26:36.618 [2024-09-28 10:47:11.226974] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93336 ] 00:26:36.618 [2024-09-28 10:47:11.355418] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:36.618 [2024-09-28 10:47:11.373461] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:36.880 [2024-09-28 10:47:11.423374] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:38.786  Copying: 661/1024 [MB] (661 MBps) Copying: 1024/1024 [MB] (average 633 MBps) 00:26:38.786 00:26:38.786 10:47:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:26:38.786 10:47:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:41.328 10:47:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:26:41.328 10:47:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=452b44314752377756ede2bb4d28edcd 00:26:41.328 10:47:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:26:41.328 10:47:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:41.328 Fill FTL, iteration 2 00:26:41.328 10:47:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:26:41.328 10:47:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:26:41.328 10:47:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:41.328 10:47:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:41.328 10:47:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:41.328 10:47:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:41.328 10:47:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:26:41.328 [2024-09-28 10:47:15.639542] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:26:41.328 [2024-09-28 10:47:15.639650] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93387 ] 00:26:41.328 [2024-09-28 10:47:15.767304] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
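(Annotation) With the first window written and read back, the script records its checksum in sums[] and advances the seek/skip offsets by 1024 MiB before the second pass. The upgrade_shutdown.sh@38..@48 traces above correspond roughly to the following loop shape (a sketch, not the verbatim script; tcp_dd and testdir come from ftl/common.sh as shown earlier):

    # Rough shape of the fill/checksum loop traced above (iterations=2, 1 GiB per pass).
    iterations=2
    sums=()
    seek=0
    skip=0
    for (( i = 0; i < iterations; i++ )); do
        echo "Fill FTL, iteration $((i + 1))"
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek="$seek"
        seek=$(( seek + 1024 ))

        echo "Calculate MD5 checksum, iteration $((i + 1))"
        tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip="$skip"
        skip=$(( skip + 1024 ))

        # e.g. 452b44314752377756ede2bb4d28edcd for iteration 1 in this run
        sums[i]=$(md5sum "$testdir/file" | cut -f1 -d' ')
    done
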
00:26:41.328 [2024-09-28 10:47:15.785918] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:41.328 [2024-09-28 10:47:15.827930] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:46.119  Copying: 194/1024 [MB] (194 MBps) Copying: 412/1024 [MB] (218 MBps) Copying: 640/1024 [MB] (228 MBps) Copying: 878/1024 [MB] (238 MBps) Copying: 1024/1024 [MB] (average 219 MBps) 00:26:46.119 00:26:46.381 10:47:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:26:46.381 10:47:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:26:46.381 Calculate MD5 checksum, iteration 2 00:26:46.381 10:47:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:46.381 10:47:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:46.381 10:47:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:46.381 10:47:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:46.381 10:47:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:46.381 10:47:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:46.381 [2024-09-28 10:47:20.964700] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:26:46.381 [2024-09-28 10:47:20.964808] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93446 ] 00:26:46.381 [2024-09-28 10:47:21.093953] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:26:46.381 [2024-09-28 10:47:21.113407] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:46.642 [2024-09-28 10:47:21.173756] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:52.487  Copying: 657/1024 [MB] (657 MBps) Copying: 1024/1024 [MB] (average 654 MBps) 00:26:52.487 00:26:52.487 10:47:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:26:52.487 10:47:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:54.389 10:47:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:26:54.389 10:47:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=1e48099ee2bd96e6ad3e83b44a336897 00:26:54.389 10:47:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:26:54.389 10:47:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:54.389 10:47:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:26:54.389 [2024-09-28 10:47:28.970404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.389 [2024-09-28 10:47:28.970451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:54.389 [2024-09-28 10:47:28.970462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:54.389 [2024-09-28 10:47:28.970468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.389 [2024-09-28 10:47:28.970490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.389 [2024-09-28 10:47:28.970499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:54.389 [2024-09-28 10:47:28.970505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:54.389 [2024-09-28 10:47:28.970511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.389 [2024-09-28 10:47:28.970526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.389 [2024-09-28 10:47:28.970533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:54.389 [2024-09-28 10:47:28.970539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:54.389 [2024-09-28 10:47:28.970547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.389 [2024-09-28 10:47:28.970596] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.182 ms, result 0 00:26:54.389 true 00:26:54.389 10:47:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:54.389 { 00:26:54.389 "name": "ftl", 00:26:54.389 "properties": [ 00:26:54.389 { 00:26:54.389 "name": "superblock_version", 00:26:54.389 "value": 5, 00:26:54.389 "read-only": true 00:26:54.389 }, 00:26:54.389 { 00:26:54.389 "name": "base_device", 00:26:54.389 "bands": [ 00:26:54.389 { 00:26:54.389 "id": 0, 00:26:54.389 "state": "FREE", 00:26:54.389 "validity": 0.0 00:26:54.389 }, 00:26:54.389 { 00:26:54.389 "id": 1, 00:26:54.389 "state": "FREE", 00:26:54.389 "validity": 0.0 00:26:54.389 }, 00:26:54.389 { 00:26:54.389 "id": 2, 00:26:54.389 "state": "FREE", 00:26:54.389 "validity": 0.0 00:26:54.389 }, 00:26:54.389 { 00:26:54.389 "id": 3, 00:26:54.389 "state": "FREE", 00:26:54.389 "validity": 0.0 00:26:54.389 }, 
00:26:54.390 { 00:26:54.390 "id": 4, 00:26:54.390 "state": "FREE", 00:26:54.390 "validity": 0.0 00:26:54.390 }, 00:26:54.390 { 00:26:54.390 "id": 5, 00:26:54.390 "state": "FREE", 00:26:54.390 "validity": 0.0 00:26:54.390 }, 00:26:54.390 { 00:26:54.390 "id": 6, 00:26:54.390 "state": "FREE", 00:26:54.390 "validity": 0.0 00:26:54.390 }, 00:26:54.390 { 00:26:54.390 "id": 7, 00:26:54.390 "state": "FREE", 00:26:54.390 "validity": 0.0 00:26:54.390 }, 00:26:54.390 { 00:26:54.390 "id": 8, 00:26:54.390 "state": "FREE", 00:26:54.390 "validity": 0.0 00:26:54.390 }, 00:26:54.390 { 00:26:54.390 "id": 9, 00:26:54.390 "state": "FREE", 00:26:54.390 "validity": 0.0 00:26:54.390 }, 00:26:54.390 { 00:26:54.390 "id": 10, 00:26:54.390 "state": "FREE", 00:26:54.390 "validity": 0.0 00:26:54.390 }, 00:26:54.390 { 00:26:54.390 "id": 11, 00:26:54.390 "state": "FREE", 00:26:54.390 "validity": 0.0 00:26:54.390 }, 00:26:54.390 { 00:26:54.390 "id": 12, 00:26:54.390 "state": "FREE", 00:26:54.390 "validity": 0.0 00:26:54.390 }, 00:26:54.390 { 00:26:54.390 "id": 13, 00:26:54.390 "state": "FREE", 00:26:54.390 "validity": 0.0 00:26:54.390 }, 00:26:54.390 { 00:26:54.390 "id": 14, 00:26:54.390 "state": "FREE", 00:26:54.390 "validity": 0.0 00:26:54.390 }, 00:26:54.390 { 00:26:54.390 "id": 15, 00:26:54.390 "state": "FREE", 00:26:54.390 "validity": 0.0 00:26:54.390 }, 00:26:54.390 { 00:26:54.390 "id": 16, 00:26:54.390 "state": "FREE", 00:26:54.390 "validity": 0.0 00:26:54.390 }, 00:26:54.390 { 00:26:54.390 "id": 17, 00:26:54.390 "state": "FREE", 00:26:54.390 "validity": 0.0 00:26:54.390 } 00:26:54.390 ], 00:26:54.390 "read-only": true 00:26:54.390 }, 00:26:54.390 { 00:26:54.390 "name": "cache_device", 00:26:54.390 "type": "bdev", 00:26:54.390 "chunks": [ 00:26:54.390 { 00:26:54.390 "id": 0, 00:26:54.390 "state": "INACTIVE", 00:26:54.390 "utilization": 0.0 00:26:54.390 }, 00:26:54.390 { 00:26:54.390 "id": 1, 00:26:54.390 "state": "CLOSED", 00:26:54.390 "utilization": 1.0 00:26:54.390 }, 00:26:54.390 { 00:26:54.390 "id": 2, 00:26:54.390 "state": "CLOSED", 00:26:54.390 "utilization": 1.0 00:26:54.390 }, 00:26:54.390 { 00:26:54.390 "id": 3, 00:26:54.390 "state": "OPEN", 00:26:54.390 "utilization": 0.001953125 00:26:54.390 }, 00:26:54.390 { 00:26:54.390 "id": 4, 00:26:54.390 "state": "OPEN", 00:26:54.390 "utilization": 0.0 00:26:54.390 } 00:26:54.390 ], 00:26:54.390 "read-only": true 00:26:54.390 }, 00:26:54.390 { 00:26:54.390 "name": "verbose_mode", 00:26:54.390 "value": true, 00:26:54.390 "unit": "", 00:26:54.390 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:26:54.390 }, 00:26:54.390 { 00:26:54.390 "name": "prep_upgrade_on_shutdown", 00:26:54.390 "value": false, 00:26:54.390 "unit": "", 00:26:54.390 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:26:54.390 } 00:26:54.390 ] 00:26:54.390 } 00:26:54.390 10:47:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:26:54.649 [2024-09-28 10:47:29.274650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.649 [2024-09-28 10:47:29.274682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:54.649 [2024-09-28 10:47:29.274690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:54.649 [2024-09-28 10:47:29.274695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.649 [2024-09-28 
10:47:29.274723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.649 [2024-09-28 10:47:29.274730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:54.649 [2024-09-28 10:47:29.274735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:54.649 [2024-09-28 10:47:29.274741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.649 [2024-09-28 10:47:29.274755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.649 [2024-09-28 10:47:29.274761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:54.649 [2024-09-28 10:47:29.274766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:54.649 [2024-09-28 10:47:29.274772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.649 [2024-09-28 10:47:29.274812] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.152 ms, result 0 00:26:54.649 true 00:26:54.649 10:47:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:26:54.649 10:47:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:54.649 10:47:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:26:54.907 10:47:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:26:54.907 10:47:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:26:54.907 10:47:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:26:54.907 [2024-09-28 10:47:29.646993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.907 [2024-09-28 10:47:29.647096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:54.907 [2024-09-28 10:47:29.647138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:54.907 [2024-09-28 10:47:29.647155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.907 [2024-09-28 10:47:29.647184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.907 [2024-09-28 10:47:29.647200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:54.907 [2024-09-28 10:47:29.647215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:54.907 [2024-09-28 10:47:29.647229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.907 [2024-09-28 10:47:29.647252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.907 [2024-09-28 10:47:29.647268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:54.908 [2024-09-28 10:47:29.647283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:54.908 [2024-09-28 10:47:29.647329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.908 [2024-09-28 10:47:29.647384] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.378 ms, result 0 00:26:54.908 true 00:26:54.908 10:47:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties 
-b ftl 00:26:55.166 { 00:26:55.166 "name": "ftl", 00:26:55.166 "properties": [ 00:26:55.166 { 00:26:55.166 "name": "superblock_version", 00:26:55.166 "value": 5, 00:26:55.166 "read-only": true 00:26:55.166 }, 00:26:55.166 { 00:26:55.166 "name": "base_device", 00:26:55.166 "bands": [ 00:26:55.166 { 00:26:55.166 "id": 0, 00:26:55.166 "state": "FREE", 00:26:55.166 "validity": 0.0 00:26:55.166 }, 00:26:55.166 { 00:26:55.167 "id": 1, 00:26:55.167 "state": "FREE", 00:26:55.167 "validity": 0.0 00:26:55.167 }, 00:26:55.167 { 00:26:55.167 "id": 2, 00:26:55.167 "state": "FREE", 00:26:55.167 "validity": 0.0 00:26:55.167 }, 00:26:55.167 { 00:26:55.167 "id": 3, 00:26:55.167 "state": "FREE", 00:26:55.167 "validity": 0.0 00:26:55.167 }, 00:26:55.167 { 00:26:55.167 "id": 4, 00:26:55.167 "state": "FREE", 00:26:55.167 "validity": 0.0 00:26:55.167 }, 00:26:55.167 { 00:26:55.167 "id": 5, 00:26:55.167 "state": "FREE", 00:26:55.167 "validity": 0.0 00:26:55.167 }, 00:26:55.167 { 00:26:55.167 "id": 6, 00:26:55.167 "state": "FREE", 00:26:55.167 "validity": 0.0 00:26:55.167 }, 00:26:55.167 { 00:26:55.167 "id": 7, 00:26:55.167 "state": "FREE", 00:26:55.167 "validity": 0.0 00:26:55.167 }, 00:26:55.167 { 00:26:55.167 "id": 8, 00:26:55.167 "state": "FREE", 00:26:55.167 "validity": 0.0 00:26:55.167 }, 00:26:55.167 { 00:26:55.167 "id": 9, 00:26:55.167 "state": "FREE", 00:26:55.167 "validity": 0.0 00:26:55.167 }, 00:26:55.167 { 00:26:55.167 "id": 10, 00:26:55.167 "state": "FREE", 00:26:55.167 "validity": 0.0 00:26:55.167 }, 00:26:55.167 { 00:26:55.167 "id": 11, 00:26:55.167 "state": "FREE", 00:26:55.167 "validity": 0.0 00:26:55.167 }, 00:26:55.167 { 00:26:55.167 "id": 12, 00:26:55.167 "state": "FREE", 00:26:55.167 "validity": 0.0 00:26:55.167 }, 00:26:55.167 { 00:26:55.167 "id": 13, 00:26:55.167 "state": "FREE", 00:26:55.167 "validity": 0.0 00:26:55.167 }, 00:26:55.167 { 00:26:55.167 "id": 14, 00:26:55.167 "state": "FREE", 00:26:55.167 "validity": 0.0 00:26:55.167 }, 00:26:55.167 { 00:26:55.167 "id": 15, 00:26:55.167 "state": "FREE", 00:26:55.167 "validity": 0.0 00:26:55.167 }, 00:26:55.167 { 00:26:55.167 "id": 16, 00:26:55.167 "state": "FREE", 00:26:55.167 "validity": 0.0 00:26:55.167 }, 00:26:55.167 { 00:26:55.167 "id": 17, 00:26:55.167 "state": "FREE", 00:26:55.167 "validity": 0.0 00:26:55.167 } 00:26:55.167 ], 00:26:55.167 "read-only": true 00:26:55.167 }, 00:26:55.167 { 00:26:55.167 "name": "cache_device", 00:26:55.167 "type": "bdev", 00:26:55.167 "chunks": [ 00:26:55.167 { 00:26:55.167 "id": 0, 00:26:55.167 "state": "INACTIVE", 00:26:55.167 "utilization": 0.0 00:26:55.167 }, 00:26:55.167 { 00:26:55.167 "id": 1, 00:26:55.167 "state": "CLOSED", 00:26:55.167 "utilization": 1.0 00:26:55.167 }, 00:26:55.167 { 00:26:55.167 "id": 2, 00:26:55.167 "state": "CLOSED", 00:26:55.167 "utilization": 1.0 00:26:55.167 }, 00:26:55.167 { 00:26:55.167 "id": 3, 00:26:55.167 "state": "OPEN", 00:26:55.167 "utilization": 0.001953125 00:26:55.167 }, 00:26:55.167 { 00:26:55.167 "id": 4, 00:26:55.167 "state": "OPEN", 00:26:55.167 "utilization": 0.0 00:26:55.167 } 00:26:55.167 ], 00:26:55.167 "read-only": true 00:26:55.167 }, 00:26:55.167 { 00:26:55.167 "name": "verbose_mode", 00:26:55.167 "value": true, 00:26:55.167 "unit": "", 00:26:55.167 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:26:55.167 }, 00:26:55.167 { 00:26:55.167 "name": "prep_upgrade_on_shutdown", 00:26:55.167 "value": true, 00:26:55.167 "unit": "", 00:26:55.167 "desc": "During shutdown, FTL executes all actions which are needed 
for upgrade to a new version" 00:26:55.167 } 00:26:55.167 ] 00:26:55.167 } 00:26:55.167 10:47:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:26:55.167 10:47:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 93119 ]] 00:26:55.167 10:47:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 93119 00:26:55.167 10:47:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 93119 ']' 00:26:55.167 10:47:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 93119 00:26:55.167 10:47:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:26:55.167 10:47:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:55.167 10:47:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93119 00:26:55.167 killing process with pid 93119 00:26:55.167 10:47:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:55.167 10:47:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:55.167 10:47:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93119' 00:26:55.167 10:47:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 93119 00:26:55.167 10:47:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 93119 00:26:55.425 [2024-09-28 10:47:29.969363] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:26:55.425 [2024-09-28 10:47:29.973239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.425 [2024-09-28 10:47:29.973270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:26:55.425 [2024-09-28 10:47:29.973279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:55.425 [2024-09-28 10:47:29.973286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.425 [2024-09-28 10:47:29.973306] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:26:55.425 [2024-09-28 10:47:29.973694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.425 [2024-09-28 10:47:29.973707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:26:55.425 [2024-09-28 10:47:29.973714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.379 ms 00:26:55.425 [2024-09-28 10:47:29.973723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:03.580 [2024-09-28 10:47:37.559427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:03.580 [2024-09-28 10:47:37.559485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:03.580 [2024-09-28 10:47:37.559501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7585.665 ms 00:27:03.580 [2024-09-28 10:47:37.559509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:03.580 [2024-09-28 10:47:37.560737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:03.580 [2024-09-28 10:47:37.560766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:03.580 [2024-09-28 10:47:37.560773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.216 ms 00:27:03.580 [2024-09-28 10:47:37.560779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:03.580 
[2024-09-28 10:47:37.561637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:03.580 [2024-09-28 10:47:37.561654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:27:03.580 [2024-09-28 10:47:37.561667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.836 ms 00:27:03.580 [2024-09-28 10:47:37.561673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:03.580 [2024-09-28 10:47:37.563058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:03.580 [2024-09-28 10:47:37.563085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:03.580 [2024-09-28 10:47:37.563094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.359 ms 00:27:03.580 [2024-09-28 10:47:37.563100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:03.580 [2024-09-28 10:47:37.564912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:03.580 [2024-09-28 10:47:37.564942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:03.580 [2024-09-28 10:47:37.564950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.785 ms 00:27:03.580 [2024-09-28 10:47:37.564956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:03.580 [2024-09-28 10:47:37.565009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:03.580 [2024-09-28 10:47:37.565020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:03.580 [2024-09-28 10:47:37.565027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:27:03.580 [2024-09-28 10:47:37.565033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:03.580 [2024-09-28 10:47:37.566119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:03.580 [2024-09-28 10:47:37.566145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:27:03.580 [2024-09-28 10:47:37.566152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.070 ms 00:27:03.580 [2024-09-28 10:47:37.566157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:03.580 [2024-09-28 10:47:37.567094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:03.580 [2024-09-28 10:47:37.567119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:27:03.580 [2024-09-28 10:47:37.567126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.912 ms 00:27:03.580 [2024-09-28 10:47:37.567132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:03.580 [2024-09-28 10:47:37.567920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:03.580 [2024-09-28 10:47:37.567954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:03.580 [2024-09-28 10:47:37.567975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.764 ms 00:27:03.580 [2024-09-28 10:47:37.567981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:03.580 [2024-09-28 10:47:37.568819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:03.580 [2024-09-28 10:47:37.568845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:03.580 [2024-09-28 10:47:37.568853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.791 ms 00:27:03.580 [2024-09-28 10:47:37.568858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:03.580 [2024-09-28 10:47:37.568881] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:03.580 [2024-09-28 10:47:37.568892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:03.580 [2024-09-28 10:47:37.568899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:03.580 [2024-09-28 10:47:37.568906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:03.580 [2024-09-28 10:47:37.568912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:03.580 [2024-09-28 10:47:37.568918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:03.580 [2024-09-28 10:47:37.568924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:03.581 [2024-09-28 10:47:37.568930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:03.581 [2024-09-28 10:47:37.568936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:03.581 [2024-09-28 10:47:37.568942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:03.581 [2024-09-28 10:47:37.568948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:03.581 [2024-09-28 10:47:37.568953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:03.581 [2024-09-28 10:47:37.568972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:03.581 [2024-09-28 10:47:37.568978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:03.581 [2024-09-28 10:47:37.568984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:03.581 [2024-09-28 10:47:37.568990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:03.581 [2024-09-28 10:47:37.568996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:03.581 [2024-09-28 10:47:37.569001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:03.581 [2024-09-28 10:47:37.569007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:03.581 [2024-09-28 10:47:37.569015] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:03.581 [2024-09-28 10:47:37.569022] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 129af541-3381-49a7-a48a-35d23a66a43f 00:27:03.581 [2024-09-28 10:47:37.569029] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:03.581 [2024-09-28 10:47:37.569035] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:27:03.581 [2024-09-28 10:47:37.569040] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:27:03.581 [2024-09-28 10:47:37.569046] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:27:03.581 [2024-09-28 10:47:37.569056] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:03.581 
[2024-09-28 10:47:37.569062] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:03.581 [2024-09-28 10:47:37.569068] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:03.581 [2024-09-28 10:47:37.569073] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:03.581 [2024-09-28 10:47:37.569079] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:03.581 [2024-09-28 10:47:37.569085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:03.581 [2024-09-28 10:47:37.569091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:03.581 [2024-09-28 10:47:37.569101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.204 ms 00:27:03.581 [2024-09-28 10:47:37.569106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:03.581 [2024-09-28 10:47:37.570531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:03.581 [2024-09-28 10:47:37.570616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:03.581 [2024-09-28 10:47:37.570669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.412 ms 00:27:03.581 [2024-09-28 10:47:37.570686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:03.581 [2024-09-28 10:47:37.570762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:03.581 [2024-09-28 10:47:37.570938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:03.581 [2024-09-28 10:47:37.570958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:27:03.581 [2024-09-28 10:47:37.570983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:03.581 [2024-09-28 10:47:37.575383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:03.581 [2024-09-28 10:47:37.575479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:03.581 [2024-09-28 10:47:37.575522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:03.581 [2024-09-28 10:47:37.575540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:03.581 [2024-09-28 10:47:37.575571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:03.581 [2024-09-28 10:47:37.575633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:03.581 [2024-09-28 10:47:37.575651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:03.581 [2024-09-28 10:47:37.575666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:03.581 [2024-09-28 10:47:37.575733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:03.581 [2024-09-28 10:47:37.575757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:03.581 [2024-09-28 10:47:37.575777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:03.581 [2024-09-28 10:47:37.575791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:03.581 [2024-09-28 10:47:37.575840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:03.581 [2024-09-28 10:47:37.575858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:03.581 [2024-09-28 10:47:37.575936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:03.581 [2024-09-28 10:47:37.575954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 
0 00:27:03.581 [2024-09-28 10:47:37.583504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:03.581 [2024-09-28 10:47:37.583620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:03.581 [2024-09-28 10:47:37.583672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:03.581 [2024-09-28 10:47:37.583716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:03.581 [2024-09-28 10:47:37.589968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:03.581 [2024-09-28 10:47:37.590087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:03.581 [2024-09-28 10:47:37.590130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:03.581 [2024-09-28 10:47:37.590147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:03.581 [2024-09-28 10:47:37.590190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:03.581 [2024-09-28 10:47:37.590310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:03.581 [2024-09-28 10:47:37.590337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:03.581 [2024-09-28 10:47:37.590346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:03.581 [2024-09-28 10:47:37.590386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:03.581 [2024-09-28 10:47:37.590393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:03.581 [2024-09-28 10:47:37.590400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:03.581 [2024-09-28 10:47:37.590406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:03.581 [2024-09-28 10:47:37.590468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:03.581 [2024-09-28 10:47:37.590475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:03.581 [2024-09-28 10:47:37.590482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:03.581 [2024-09-28 10:47:37.590488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:03.581 [2024-09-28 10:47:37.590513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:03.581 [2024-09-28 10:47:37.590520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:03.581 [2024-09-28 10:47:37.590526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:03.581 [2024-09-28 10:47:37.590531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:03.581 [2024-09-28 10:47:37.590559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:03.581 [2024-09-28 10:47:37.590569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:03.581 [2024-09-28 10:47:37.590575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:03.581 [2024-09-28 10:47:37.590581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:03.581 [2024-09-28 10:47:37.590619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:03.581 [2024-09-28 10:47:37.590627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:03.581 [2024-09-28 10:47:37.590633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:03.581 [2024-09-28 10:47:37.590639] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:03.581 [2024-09-28 10:47:37.590727] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7617.448 ms, result 0 00:27:04.150 10:47:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:04.150 10:47:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:27:04.150 10:47:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:04.150 10:47:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:04.150 10:47:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:04.150 10:47:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=93648 00:27:04.150 10:47:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:04.150 10:47:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:04.150 10:47:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 93648 00:27:04.150 10:47:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 93648 ']' 00:27:04.150 10:47:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:04.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:04.150 10:47:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:04.150 10:47:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:04.150 10:47:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:04.150 10:47:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:04.150 [2024-09-28 10:47:38.700872] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:27:04.150 [2024-09-28 10:47:38.700993] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93648 ] 00:27:04.150 [2024-09-28 10:47:38.826141] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:27:04.150 [2024-09-28 10:47:38.844948] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:04.150 [2024-09-28 10:47:38.874371] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:04.407 [2024-09-28 10:47:39.122703] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:04.407 [2024-09-28 10:47:39.122908] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:04.667 [2024-09-28 10:47:39.260368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.667 [2024-09-28 10:47:39.260402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:04.667 [2024-09-28 10:47:39.260415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:04.667 [2024-09-28 10:47:39.260421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.667 [2024-09-28 10:47:39.260460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.667 [2024-09-28 10:47:39.260468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:04.667 [2024-09-28 10:47:39.260474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:27:04.667 [2024-09-28 10:47:39.260483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.667 [2024-09-28 10:47:39.260501] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:04.667 [2024-09-28 10:47:39.260673] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:04.667 [2024-09-28 10:47:39.260686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.667 [2024-09-28 10:47:39.260694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:04.667 [2024-09-28 10:47:39.260702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.193 ms 00:27:04.667 [2024-09-28 10:47:39.260708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.667 [2024-09-28 10:47:39.261657] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:04.667 [2024-09-28 10:47:39.263684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.667 [2024-09-28 10:47:39.263714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:04.667 [2024-09-28 10:47:39.263726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.028 ms 00:27:04.667 [2024-09-28 10:47:39.263732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.667 [2024-09-28 10:47:39.263778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.667 [2024-09-28 10:47:39.263786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:04.667 [2024-09-28 10:47:39.263793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:27:04.667 [2024-09-28 10:47:39.263798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.667 [2024-09-28 10:47:39.268119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.667 [2024-09-28 10:47:39.268237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:04.667 [2024-09-28 10:47:39.268249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.287 ms 00:27:04.667 [2024-09-28 10:47:39.268260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:27:04.667 [2024-09-28 10:47:39.268291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.667 [2024-09-28 10:47:39.268298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:04.667 [2024-09-28 10:47:39.268307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:27:04.667 [2024-09-28 10:47:39.268314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.667 [2024-09-28 10:47:39.268353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.667 [2024-09-28 10:47:39.268360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:04.667 [2024-09-28 10:47:39.268370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:04.667 [2024-09-28 10:47:39.268376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.667 [2024-09-28 10:47:39.268393] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:04.667 [2024-09-28 10:47:39.269545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.667 [2024-09-28 10:47:39.269571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:04.667 [2024-09-28 10:47:39.269578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.157 ms 00:27:04.667 [2024-09-28 10:47:39.269588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.667 [2024-09-28 10:47:39.269610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.667 [2024-09-28 10:47:39.269617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:04.667 [2024-09-28 10:47:39.269627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:04.667 [2024-09-28 10:47:39.269632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.667 [2024-09-28 10:47:39.269648] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:04.667 [2024-09-28 10:47:39.269662] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:27:04.667 [2024-09-28 10:47:39.269689] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:04.667 [2024-09-28 10:47:39.269705] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:27:04.667 [2024-09-28 10:47:39.269784] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:04.667 [2024-09-28 10:47:39.269794] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:04.667 [2024-09-28 10:47:39.269803] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:04.667 [2024-09-28 10:47:39.269814] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:04.667 [2024-09-28 10:47:39.269820] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:04.667 [2024-09-28 10:47:39.269826] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:04.667 [2024-09-28 10:47:39.269834] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:04.667 [2024-09-28 10:47:39.269840] ftl_layout.c: 
691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:04.667 [2024-09-28 10:47:39.269846] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:04.667 [2024-09-28 10:47:39.269852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.667 [2024-09-28 10:47:39.269858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:04.667 [2024-09-28 10:47:39.269866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.206 ms 00:27:04.667 [2024-09-28 10:47:39.269873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.667 [2024-09-28 10:47:39.269937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.667 [2024-09-28 10:47:39.269943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:04.667 [2024-09-28 10:47:39.269949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:27:04.667 [2024-09-28 10:47:39.269954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.667 [2024-09-28 10:47:39.270043] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:04.667 [2024-09-28 10:47:39.270053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:04.667 [2024-09-28 10:47:39.270060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:04.668 [2024-09-28 10:47:39.270066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:04.668 [2024-09-28 10:47:39.270073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:04.668 [2024-09-28 10:47:39.270078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:04.668 [2024-09-28 10:47:39.270083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:04.668 [2024-09-28 10:47:39.270088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:04.668 [2024-09-28 10:47:39.270094] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:04.668 [2024-09-28 10:47:39.270099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:04.668 [2024-09-28 10:47:39.270108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:04.668 [2024-09-28 10:47:39.270113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:04.668 [2024-09-28 10:47:39.270119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:04.668 [2024-09-28 10:47:39.270124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:04.668 [2024-09-28 10:47:39.270129] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:04.668 [2024-09-28 10:47:39.270133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:04.668 [2024-09-28 10:47:39.270138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:04.668 [2024-09-28 10:47:39.270148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:04.668 [2024-09-28 10:47:39.270153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:04.668 [2024-09-28 10:47:39.270161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:04.668 [2024-09-28 10:47:39.270167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:04.668 [2024-09-28 10:47:39.270172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:04.668 [2024-09-28 
10:47:39.270177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:04.668 [2024-09-28 10:47:39.270185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:04.668 [2024-09-28 10:47:39.270190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:04.668 [2024-09-28 10:47:39.270195] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:04.668 [2024-09-28 10:47:39.270200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:04.668 [2024-09-28 10:47:39.270205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:04.668 [2024-09-28 10:47:39.270209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:04.668 [2024-09-28 10:47:39.270214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:04.668 [2024-09-28 10:47:39.270219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:04.668 [2024-09-28 10:47:39.270223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:04.668 [2024-09-28 10:47:39.270228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:04.668 [2024-09-28 10:47:39.270234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:04.668 [2024-09-28 10:47:39.270239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:04.668 [2024-09-28 10:47:39.270244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:04.668 [2024-09-28 10:47:39.270249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:04.668 [2024-09-28 10:47:39.270254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:04.668 [2024-09-28 10:47:39.270259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:04.668 [2024-09-28 10:47:39.270263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:04.668 [2024-09-28 10:47:39.270268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:04.668 [2024-09-28 10:47:39.270273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:04.668 [2024-09-28 10:47:39.270279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:04.668 [2024-09-28 10:47:39.270284] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:04.668 [2024-09-28 10:47:39.270290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:04.668 [2024-09-28 10:47:39.270295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:04.668 [2024-09-28 10:47:39.270300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:04.668 [2024-09-28 10:47:39.270306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:04.668 [2024-09-28 10:47:39.270311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:04.668 [2024-09-28 10:47:39.270317] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:04.668 [2024-09-28 10:47:39.270323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:04.668 [2024-09-28 10:47:39.270328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:04.668 [2024-09-28 10:47:39.270332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:04.668 [2024-09-28 10:47:39.270338] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB 
metadata layout - nvc: 00:27:04.668 [2024-09-28 10:47:39.270345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:04.668 [2024-09-28 10:47:39.270351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:04.668 [2024-09-28 10:47:39.270356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:04.668 [2024-09-28 10:47:39.270361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:04.668 [2024-09-28 10:47:39.270366] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:04.668 [2024-09-28 10:47:39.270372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:04.668 [2024-09-28 10:47:39.270377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:04.668 [2024-09-28 10:47:39.270382] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:04.668 [2024-09-28 10:47:39.270387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:04.668 [2024-09-28 10:47:39.270392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:04.668 [2024-09-28 10:47:39.270397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:04.668 [2024-09-28 10:47:39.270404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:04.668 [2024-09-28 10:47:39.270409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:04.668 [2024-09-28 10:47:39.270415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:04.668 [2024-09-28 10:47:39.270435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:04.668 [2024-09-28 10:47:39.270441] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:04.668 [2024-09-28 10:47:39.270447] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:04.668 [2024-09-28 10:47:39.270453] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:04.668 [2024-09-28 10:47:39.270458] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:04.668 [2024-09-28 10:47:39.270464] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:04.668 [2024-09-28 10:47:39.270471] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:04.668 [2024-09-28 10:47:39.270477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.668 [2024-09-28 10:47:39.270482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:04.668 [2024-09-28 10:47:39.270490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.486 ms 00:27:04.668 [2024-09-28 10:47:39.270495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.669 [2024-09-28 10:47:39.270527] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:27:04.669 [2024-09-28 10:47:39.270534] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:08.922 [2024-09-28 10:47:42.839044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.922 [2024-09-28 10:47:42.839104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:08.922 [2024-09-28 10:47:42.839119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3568.505 ms 00:27:08.922 [2024-09-28 10:47:42.839128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.922 [2024-09-28 10:47:42.847131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.922 [2024-09-28 10:47:42.847170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:08.922 [2024-09-28 10:47:42.847181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.919 ms 00:27:08.922 [2024-09-28 10:47:42.847190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.922 [2024-09-28 10:47:42.847231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.922 [2024-09-28 10:47:42.847240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:08.922 [2024-09-28 10:47:42.847248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:27:08.922 [2024-09-28 10:47:42.847262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.922 [2024-09-28 10:47:42.862430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.922 [2024-09-28 10:47:42.862480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:08.922 [2024-09-28 10:47:42.862495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.106 ms 00:27:08.922 [2024-09-28 10:47:42.862505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.922 [2024-09-28 10:47:42.862547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.922 [2024-09-28 10:47:42.862558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:08.922 [2024-09-28 10:47:42.862569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:08.922 [2024-09-28 10:47:42.862578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.922 [2024-09-28 10:47:42.863018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.922 [2024-09-28 10:47:42.863038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:08.922 [2024-09-28 10:47:42.863051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.341 ms 00:27:08.922 [2024-09-28 10:47:42.863062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
00:27:08.922 [2024-09-28 10:47:42.863119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.922 [2024-09-28 10:47:42.863132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:08.922 [2024-09-28 10:47:42.863151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:27:08.922 [2024-09-28 10:47:42.863162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.922 [2024-09-28 10:47:42.868940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.923 [2024-09-28 10:47:42.868991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:08.923 [2024-09-28 10:47:42.869009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.751 ms 00:27:08.923 [2024-09-28 10:47:42.869019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.923 [2024-09-28 10:47:42.871518] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:27:08.923 [2024-09-28 10:47:42.871559] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:08.923 [2024-09-28 10:47:42.871573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.923 [2024-09-28 10:47:42.871584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:27:08.923 [2024-09-28 10:47:42.871594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.456 ms 00:27:08.923 [2024-09-28 10:47:42.871603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.923 [2024-09-28 10:47:42.875860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.923 [2024-09-28 10:47:42.875893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:27:08.923 [2024-09-28 10:47:42.875907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.212 ms 00:27:08.923 [2024-09-28 10:47:42.875914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.923 [2024-09-28 10:47:42.877491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.923 [2024-09-28 10:47:42.877612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:27:08.923 [2024-09-28 10:47:42.877626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.539 ms 00:27:08.923 [2024-09-28 10:47:42.877633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.923 [2024-09-28 10:47:42.879196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.923 [2024-09-28 10:47:42.879217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:27:08.923 [2024-09-28 10:47:42.879225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.532 ms 00:27:08.923 [2024-09-28 10:47:42.879233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.923 [2024-09-28 10:47:42.879613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.923 [2024-09-28 10:47:42.879630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:08.923 [2024-09-28 10:47:42.879640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.245 ms 00:27:08.923 [2024-09-28 10:47:42.879647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.923 [2024-09-28 10:47:42.893677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl] Action 00:27:08.923 [2024-09-28 10:47:42.893721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:08.923 [2024-09-28 10:47:42.893733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.013 ms 00:27:08.923 [2024-09-28 10:47:42.893741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.923 [2024-09-28 10:47:42.901109] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:08.923 [2024-09-28 10:47:42.901742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.923 [2024-09-28 10:47:42.901772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:08.923 [2024-09-28 10:47:42.901785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.966 ms 00:27:08.923 [2024-09-28 10:47:42.901796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.923 [2024-09-28 10:47:42.901845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.923 [2024-09-28 10:47:42.901854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:27:08.923 [2024-09-28 10:47:42.901863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:27:08.923 [2024-09-28 10:47:42.901870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.923 [2024-09-28 10:47:42.901928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.923 [2024-09-28 10:47:42.901939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:08.923 [2024-09-28 10:47:42.901948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:27:08.923 [2024-09-28 10:47:42.901958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.923 [2024-09-28 10:47:42.901995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.923 [2024-09-28 10:47:42.902003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:08.923 [2024-09-28 10:47:42.902010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:08.923 [2024-09-28 10:47:42.902018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.923 [2024-09-28 10:47:42.902047] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:08.923 [2024-09-28 10:47:42.902061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.923 [2024-09-28 10:47:42.902068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:08.923 [2024-09-28 10:47:42.902076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:27:08.923 [2024-09-28 10:47:42.902083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.923 [2024-09-28 10:47:42.905452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.923 [2024-09-28 10:47:42.905484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:08.923 [2024-09-28 10:47:42.905501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.349 ms 00:27:08.923 [2024-09-28 10:47:42.905510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.923 [2024-09-28 10:47:42.905575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.923 [2024-09-28 10:47:42.905584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:08.923 
[2024-09-28 10:47:42.905592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:27:08.923 [2024-09-28 10:47:42.905599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.923 [2024-09-28 10:47:42.906704] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3645.930 ms, result 0 00:27:08.923 [2024-09-28 10:47:42.922470] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:08.923 [2024-09-28 10:47:42.938443] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:08.923 [2024-09-28 10:47:42.946559] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:08.923 10:47:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:08.923 10:47:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:27:08.923 10:47:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:08.923 10:47:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:27:08.923 10:47:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:08.923 [2024-09-28 10:47:43.178672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.923 [2024-09-28 10:47:43.178727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:08.923 [2024-09-28 10:47:43.178741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:08.923 [2024-09-28 10:47:43.178750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.923 [2024-09-28 10:47:43.178773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.923 [2024-09-28 10:47:43.178782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:08.923 [2024-09-28 10:47:43.178796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:08.923 [2024-09-28 10:47:43.178804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.923 [2024-09-28 10:47:43.178827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.923 [2024-09-28 10:47:43.178836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:08.923 [2024-09-28 10:47:43.178845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:08.923 [2024-09-28 10:47:43.178853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.923 [2024-09-28 10:47:43.178915] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.229 ms, result 0 00:27:08.923 true 00:27:08.923 10:47:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:08.923 { 00:27:08.923 "name": "ftl", 00:27:08.923 "properties": [ 00:27:08.923 { 00:27:08.923 "name": "superblock_version", 00:27:08.923 "value": 5, 00:27:08.923 "read-only": true 00:27:08.923 }, 00:27:08.923 { 00:27:08.923 "name": "base_device", 00:27:08.923 "bands": [ 00:27:08.923 { 00:27:08.923 "id": 0, 00:27:08.923 "state": "CLOSED", 00:27:08.923 "validity": 1.0 00:27:08.923 }, 00:27:08.923 { 00:27:08.923 "id": 1, 00:27:08.923 "state": "CLOSED", 00:27:08.923 "validity": 1.0 
00:27:08.923 }, 00:27:08.923 { 00:27:08.923 "id": 2, 00:27:08.923 "state": "CLOSED", 00:27:08.923 "validity": 0.007843137254901933 00:27:08.923 }, 00:27:08.923 { 00:27:08.923 "id": 3, 00:27:08.923 "state": "FREE", 00:27:08.923 "validity": 0.0 00:27:08.923 }, 00:27:08.923 { 00:27:08.923 "id": 4, 00:27:08.923 "state": "FREE", 00:27:08.923 "validity": 0.0 00:27:08.923 }, 00:27:08.923 { 00:27:08.923 "id": 5, 00:27:08.923 "state": "FREE", 00:27:08.923 "validity": 0.0 00:27:08.923 }, 00:27:08.923 { 00:27:08.923 "id": 6, 00:27:08.923 "state": "FREE", 00:27:08.923 "validity": 0.0 00:27:08.923 }, 00:27:08.923 { 00:27:08.923 "id": 7, 00:27:08.923 "state": "FREE", 00:27:08.923 "validity": 0.0 00:27:08.923 }, 00:27:08.923 { 00:27:08.923 "id": 8, 00:27:08.923 "state": "FREE", 00:27:08.923 "validity": 0.0 00:27:08.923 }, 00:27:08.923 { 00:27:08.923 "id": 9, 00:27:08.923 "state": "FREE", 00:27:08.923 "validity": 0.0 00:27:08.923 }, 00:27:08.923 { 00:27:08.923 "id": 10, 00:27:08.923 "state": "FREE", 00:27:08.923 "validity": 0.0 00:27:08.923 }, 00:27:08.923 { 00:27:08.923 "id": 11, 00:27:08.923 "state": "FREE", 00:27:08.923 "validity": 0.0 00:27:08.923 }, 00:27:08.923 { 00:27:08.923 "id": 12, 00:27:08.923 "state": "FREE", 00:27:08.923 "validity": 0.0 00:27:08.923 }, 00:27:08.923 { 00:27:08.924 "id": 13, 00:27:08.924 "state": "FREE", 00:27:08.924 "validity": 0.0 00:27:08.924 }, 00:27:08.924 { 00:27:08.924 "id": 14, 00:27:08.924 "state": "FREE", 00:27:08.924 "validity": 0.0 00:27:08.924 }, 00:27:08.924 { 00:27:08.924 "id": 15, 00:27:08.924 "state": "FREE", 00:27:08.924 "validity": 0.0 00:27:08.924 }, 00:27:08.924 { 00:27:08.924 "id": 16, 00:27:08.924 "state": "FREE", 00:27:08.924 "validity": 0.0 00:27:08.924 }, 00:27:08.924 { 00:27:08.924 "id": 17, 00:27:08.924 "state": "FREE", 00:27:08.924 "validity": 0.0 00:27:08.924 } 00:27:08.924 ], 00:27:08.924 "read-only": true 00:27:08.924 }, 00:27:08.924 { 00:27:08.924 "name": "cache_device", 00:27:08.924 "type": "bdev", 00:27:08.924 "chunks": [ 00:27:08.924 { 00:27:08.924 "id": 0, 00:27:08.924 "state": "INACTIVE", 00:27:08.924 "utilization": 0.0 00:27:08.924 }, 00:27:08.924 { 00:27:08.924 "id": 1, 00:27:08.924 "state": "OPEN", 00:27:08.924 "utilization": 0.0 00:27:08.924 }, 00:27:08.924 { 00:27:08.924 "id": 2, 00:27:08.924 "state": "OPEN", 00:27:08.924 "utilization": 0.0 00:27:08.924 }, 00:27:08.924 { 00:27:08.924 "id": 3, 00:27:08.924 "state": "FREE", 00:27:08.924 "utilization": 0.0 00:27:08.924 }, 00:27:08.924 { 00:27:08.924 "id": 4, 00:27:08.924 "state": "FREE", 00:27:08.924 "utilization": 0.0 00:27:08.924 } 00:27:08.924 ], 00:27:08.924 "read-only": true 00:27:08.924 }, 00:27:08.924 { 00:27:08.924 "name": "verbose_mode", 00:27:08.924 "value": true, 00:27:08.924 "unit": "", 00:27:08.924 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:08.924 }, 00:27:08.924 { 00:27:08.924 "name": "prep_upgrade_on_shutdown", 00:27:08.924 "value": false, 00:27:08.924 "unit": "", 00:27:08.924 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:08.924 } 00:27:08.924 ] 00:27:08.924 } 00:27:08.924 10:47:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:27:08.924 10:47:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:08.924 10:47:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:08.924 10:47:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:27:08.924 10:47:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:27:08.924 10:47:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:27:08.924 10:47:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:08.924 10:47:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:27:09.186 10:47:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:27:09.186 10:47:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:27:09.186 10:47:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:27:09.186 10:47:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:09.186 10:47:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:09.186 Validate MD5 checksum, iteration 1 00:27:09.186 10:47:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:09.186 10:47:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:09.186 10:47:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:09.186 10:47:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:09.186 10:47:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:09.186 10:47:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:09.186 10:47:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:09.186 10:47:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:09.186 [2024-09-28 10:47:43.892795] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:27:09.186 [2024-09-28 10:47:43.892901] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93718 ] 00:27:09.447 [2024-09-28 10:47:44.017847] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
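Everything bdev_ftl_get_properties prints above is ordinary JSON (each fragment just carries the log timestamp), so the checks the script traces can be repeated by hand against a live target. A minimal sketch, reusing the chunk-utilization filter traced above and only summarizing band states; rpc.py and the bdev name ftl are taken from this run, and -s would select a non-default RPC socket:

    #!/usr/bin/env bash
    # Inspect FTL state the same way the upgrade/shutdown test does.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    props=$("$RPC" bdev_ftl_get_properties -b ftl)

    # Cache chunks that still hold data (the test expects 0 right after a scrub).
    jq '[.properties[] | select(.name == "cache_device") | .chunks[]
         | select(.utilization != 0.0)] | length' <<< "$props"

    # Distribution of base-device band states (CLOSED / FREE in the dump above).
    jq -r '.properties[] | select(.name == "base_device") | .bands[].state' <<< "$props" \
        | sort | uniq -c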
00:27:09.447 [2024-09-28 10:47:44.038846] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.447 [2024-09-28 10:47:44.102038] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:12.351  Copying: 531/1024 [MB] (531 MBps) Copying: 1024/1024 [MB] (average 563 MBps) 00:27:12.351 00:27:12.351 10:47:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:12.351 10:47:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:14.262 10:47:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:14.262 Validate MD5 checksum, iteration 2 00:27:14.262 10:47:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=452b44314752377756ede2bb4d28edcd 00:27:14.262 10:47:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 452b44314752377756ede2bb4d28edcd != \4\5\2\b\4\4\3\1\4\7\5\2\3\7\7\7\5\6\e\d\e\2\b\b\4\d\2\8\e\d\c\d ]] 00:27:14.262 10:47:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:14.262 10:47:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:14.262 10:47:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:14.262 10:47:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:14.262 10:47:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:14.262 10:47:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:14.262 10:47:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:14.262 10:47:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:14.262 10:47:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:14.262 [2024-09-28 10:47:48.651561] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:27:14.262 [2024-09-28 10:47:48.651674] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93768 ] 00:27:14.262 [2024-09-28 10:47:48.776092] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
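Each "Validate MD5 checksum" iteration above is the same three moves: stream a 1024 MiB window out of the ftln1 bdev with spdk_dd over the target's RPC socket, hash the resulting file, and compare against the checksum recorded for that window earlier in the test. A condensed sketch of one iteration, with the spdk_dd arguments copied from the trace above (the paths, skip offset, and reference sum are this run's values, shown only for illustration):

    #!/usr/bin/env bash
    # One checksum-validation pass over a 1 GiB window of the FTL bdev.
    SPDK=/home/vagrant/spdk_repo/spdk
    OUT=$SPDK/test/ftl/file
    SKIP=0                                       # 0 for iteration 1, 1024 for iteration 2
    EXPECTED=452b44314752377756ede2bb4d28edcd    # sum recorded for this window earlier in the run

    "$SPDK/build/bin/spdk_dd" '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json="$SPDK/test/ftl/config/ini.json" \
        --ib=ftln1 --of="$OUT" --bs=1048576 --count=1024 --qd=2 --skip="$SKIP"

    sum=$(md5sum "$OUT" | cut -f1 -d' ')
    [[ $sum == "$EXPECTED" ]] || { echo "checksum mismatch: $sum != $EXPECTED" >&2; exit 1; }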
00:27:14.262 [2024-09-28 10:47:48.794661] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:14.262 [2024-09-28 10:47:48.836490] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:16.792  Copying: 636/1024 [MB] (636 MBps) Copying: 1024/1024 [MB] (average 617 MBps) 00:27:16.792 00:27:16.792 10:47:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:27:16.792 10:47:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:18.698 10:47:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:18.698 10:47:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=1e48099ee2bd96e6ad3e83b44a336897 00:27:18.698 10:47:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 1e48099ee2bd96e6ad3e83b44a336897 != \1\e\4\8\0\9\9\e\e\2\b\d\9\6\e\6\a\d\3\e\8\3\b\4\4\a\3\3\6\8\9\7 ]] 00:27:18.698 10:47:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:18.698 10:47:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:18.698 10:47:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:27:18.698 10:47:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 93648 ]] 00:27:18.698 10:47:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 93648 00:27:18.698 10:47:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:27:18.698 10:47:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:27:18.698 10:47:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:18.698 10:47:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:18.698 10:47:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:18.698 10:47:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=93824 00:27:18.698 10:47:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:18.698 10:47:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:18.698 10:47:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 93824 00:27:18.698 10:47:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 93824 ']' 00:27:18.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:18.698 10:47:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:18.698 10:47:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:18.698 10:47:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:18.698 10:47:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:18.698 10:47:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:18.698 [2024-09-28 10:47:53.060303] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
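The dirty-shutdown step that follows is deliberately abrupt: the target owning the FTL instance is killed with SIGKILL, so no clean FTL shutdown runs and the superblock stays dirty, then a fresh spdk_tgt is started from the saved tgt.json and FTL has to come back through the recovery path (the "Recover band state", "Restore P2L checkpoints" and "Recover open chunk" actions logged below). A condensed sketch of that kill-and-restart sequence; the binary paths and config are this run's, and the readiness poll is only one way to stand in for the harness's own waitforlisten helper:

    #!/usr/bin/env bash
    # Dirty shutdown: SIGKILL the running target, then start a new one from the saved config.
    SPDK=/home/vagrant/spdk_repo/spdk
    CONFIG=$SPDK/test/ftl/config/tgt.json

    kill -9 "$spdk_tgt_pid"        # no FTL shutdown hook runs; NV cache and superblock stay dirty
    unset spdk_tgt_pid

    "$SPDK/build/bin/spdk_tgt" '--cpumask=[0]' --config="$CONFIG" &
    spdk_tgt_pid=$!

    # Poll the RPC socket until the new target is ready to accept commands.
    until "$SPDK/scripts/rpc.py" rpc_get_methods &>/dev/null; do sleep 0.5; done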
00:27:18.698 [2024-09-28 10:47:53.060583] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93824 ] 00:27:18.698 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 830: 93648 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:27:18.698 [2024-09-28 10:47:53.189715] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:18.698 [2024-09-28 10:47:53.208137] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:18.698 [2024-09-28 10:47:53.243932] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:18.960 [2024-09-28 10:47:53.496389] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:18.960 [2024-09-28 10:47:53.496448] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:18.960 [2024-09-28 10:47:53.634746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.960 [2024-09-28 10:47:53.634784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:18.960 [2024-09-28 10:47:53.634801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:18.960 [2024-09-28 10:47:53.634809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.960 [2024-09-28 10:47:53.634860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.960 [2024-09-28 10:47:53.634870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:18.960 [2024-09-28 10:47:53.634878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:27:18.960 [2024-09-28 10:47:53.634888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.960 [2024-09-28 10:47:53.634911] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:18.960 [2024-09-28 10:47:53.635160] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:18.960 [2024-09-28 10:47:53.635174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.960 [2024-09-28 10:47:53.635184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:18.960 [2024-09-28 10:47:53.635192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.273 ms 00:27:18.960 [2024-09-28 10:47:53.635200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.960 [2024-09-28 10:47:53.635435] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:18.960 [2024-09-28 10:47:53.641133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.960 [2024-09-28 10:47:53.641198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:18.960 [2024-09-28 10:47:53.641219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.695 ms 00:27:18.960 [2024-09-28 10:47:53.641242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.960 [2024-09-28 10:47:53.642669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.960 [2024-09-28 10:47:53.642719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super 
block 00:27:18.960 [2024-09-28 10:47:53.642737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:27:18.960 [2024-09-28 10:47:53.642750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.960 [2024-09-28 10:47:53.643270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.960 [2024-09-28 10:47:53.643295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:18.960 [2024-09-28 10:47:53.643310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.421 ms 00:27:18.960 [2024-09-28 10:47:53.643324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.960 [2024-09-28 10:47:53.643383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.960 [2024-09-28 10:47:53.643399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:18.960 [2024-09-28 10:47:53.643413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:27:18.960 [2024-09-28 10:47:53.643426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.960 [2024-09-28 10:47:53.643501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.960 [2024-09-28 10:47:53.643519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:18.960 [2024-09-28 10:47:53.643533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:27:18.960 [2024-09-28 10:47:53.643557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.960 [2024-09-28 10:47:53.643600] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:18.960 [2024-09-28 10:47:53.645361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.960 [2024-09-28 10:47:53.645542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:18.960 [2024-09-28 10:47:53.645995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.770 ms 00:27:18.960 [2024-09-28 10:47:53.646077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.960 [2024-09-28 10:47:53.646200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.960 [2024-09-28 10:47:53.646734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:18.960 [2024-09-28 10:47:53.646828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:18.960 [2024-09-28 10:47:53.646880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.960 [2024-09-28 10:47:53.646990] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:18.960 [2024-09-28 10:47:53.647039] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:27:18.960 [2024-09-28 10:47:53.647102] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:18.960 [2024-09-28 10:47:53.647135] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:27:18.960 [2024-09-28 10:47:53.647338] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:18.960 [2024-09-28 10:47:53.647360] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:18.960 [2024-09-28 10:47:53.647379] 
upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:18.960 [2024-09-28 10:47:53.647397] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:18.960 [2024-09-28 10:47:53.647413] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:18.960 [2024-09-28 10:47:53.647427] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:18.960 [2024-09-28 10:47:53.647440] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:18.960 [2024-09-28 10:47:53.647453] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:18.960 [2024-09-28 10:47:53.647465] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:18.960 [2024-09-28 10:47:53.647481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.960 [2024-09-28 10:47:53.647495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:18.960 [2024-09-28 10:47:53.647510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.494 ms 00:27:18.960 [2024-09-28 10:47:53.647533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.960 [2024-09-28 10:47:53.647727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.960 [2024-09-28 10:47:53.647754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:18.960 [2024-09-28 10:47:53.647778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.137 ms 00:27:18.960 [2024-09-28 10:47:53.647794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.960 [2024-09-28 10:47:53.648043] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:18.960 [2024-09-28 10:47:53.648067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:18.960 [2024-09-28 10:47:53.648084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:18.960 [2024-09-28 10:47:53.648101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:18.960 [2024-09-28 10:47:53.648126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:18.960 [2024-09-28 10:47:53.648141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:18.960 [2024-09-28 10:47:53.648156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:18.960 [2024-09-28 10:47:53.648171] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:18.960 [2024-09-28 10:47:53.648185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:18.960 [2024-09-28 10:47:53.648199] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:18.960 [2024-09-28 10:47:53.648214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:18.960 [2024-09-28 10:47:53.648229] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:18.961 [2024-09-28 10:47:53.648243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:18.961 [2024-09-28 10:47:53.648257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:18.961 [2024-09-28 10:47:53.648278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:18.961 [2024-09-28 10:47:53.648294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:18.961 [2024-09-28 
10:47:53.648310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:18.961 [2024-09-28 10:47:53.648332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:18.961 [2024-09-28 10:47:53.648346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:18.961 [2024-09-28 10:47:53.648362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:18.961 [2024-09-28 10:47:53.648376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:18.961 [2024-09-28 10:47:53.648391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:18.961 [2024-09-28 10:47:53.648405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:18.961 [2024-09-28 10:47:53.648419] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:18.961 [2024-09-28 10:47:53.648433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:18.961 [2024-09-28 10:47:53.648447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:18.961 [2024-09-28 10:47:53.648461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:18.961 [2024-09-28 10:47:53.648475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:18.961 [2024-09-28 10:47:53.648494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:18.961 [2024-09-28 10:47:53.648508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:18.961 [2024-09-28 10:47:53.648526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:18.961 [2024-09-28 10:47:53.648541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:18.961 [2024-09-28 10:47:53.648555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:18.961 [2024-09-28 10:47:53.648569] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:18.961 [2024-09-28 10:47:53.648583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:18.961 [2024-09-28 10:47:53.648597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:18.961 [2024-09-28 10:47:53.648611] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:18.961 [2024-09-28 10:47:53.648625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:18.961 [2024-09-28 10:47:53.648643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:18.961 [2024-09-28 10:47:53.648657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:18.961 [2024-09-28 10:47:53.648671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:18.961 [2024-09-28 10:47:53.648685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:18.961 [2024-09-28 10:47:53.648699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:18.961 [2024-09-28 10:47:53.648714] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:18.961 [2024-09-28 10:47:53.648729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:18.961 [2024-09-28 10:47:53.648744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:18.961 [2024-09-28 10:47:53.648762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:18.961 [2024-09-28 10:47:53.648778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:18.961 [2024-09-28 
10:47:53.648793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:18.961 [2024-09-28 10:47:53.648807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:18.961 [2024-09-28 10:47:53.648822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:18.961 [2024-09-28 10:47:53.648837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:18.961 [2024-09-28 10:47:53.648851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:18.961 [2024-09-28 10:47:53.648868] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:18.961 [2024-09-28 10:47:53.648892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:18.961 [2024-09-28 10:47:53.648910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:18.961 [2024-09-28 10:47:53.648926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:18.961 [2024-09-28 10:47:53.648942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:18.961 [2024-09-28 10:47:53.648957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:18.961 [2024-09-28 10:47:53.649324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:18.961 [2024-09-28 10:47:53.649395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:18.961 [2024-09-28 10:47:53.649459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:18.961 [2024-09-28 10:47:53.649583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:18.961 [2024-09-28 10:47:53.649653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:18.961 [2024-09-28 10:47:53.649716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:18.961 [2024-09-28 10:47:53.650069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:18.961 [2024-09-28 10:47:53.650137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:18.961 [2024-09-28 10:47:53.650201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:18.961 [2024-09-28 10:47:53.650262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:18.961 [2024-09-28 10:47:53.650316] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:18.961 [2024-09-28 10:47:53.650373] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] 
Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:18.961 [2024-09-28 10:47:53.650446] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:18.961 [2024-09-28 10:47:53.650538] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:18.961 [2024-09-28 10:47:53.650567] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:18.961 [2024-09-28 10:47:53.650651] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:18.961 [2024-09-28 10:47:53.650699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.961 [2024-09-28 10:47:53.650728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:18.961 [2024-09-28 10:47:53.650747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.819 ms 00:27:18.961 [2024-09-28 10:47:53.650838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.961 [2024-09-28 10:47:53.657574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.961 [2024-09-28 10:47:53.657680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:18.961 [2024-09-28 10:47:53.657728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.653 ms 00:27:18.961 [2024-09-28 10:47:53.657749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.961 [2024-09-28 10:47:53.657798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.961 [2024-09-28 10:47:53.657819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:18.961 [2024-09-28 10:47:53.657838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:27:18.961 [2024-09-28 10:47:53.657856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.961 [2024-09-28 10:47:53.674466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.961 [2024-09-28 10:47:53.674648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:18.961 [2024-09-28 10:47:53.674736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.540 ms 00:27:18.961 [2024-09-28 10:47:53.674769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.961 [2024-09-28 10:47:53.674853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.961 [2024-09-28 10:47:53.674894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:18.961 [2024-09-28 10:47:53.674924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:27:18.961 [2024-09-28 10:47:53.674955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.961 [2024-09-28 10:47:53.675200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.961 [2024-09-28 10:47:53.675253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:18.961 [2024-09-28 10:47:53.675288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.062 ms 00:27:18.961 [2024-09-28 10:47:53.675316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.961 [2024-09-28 10:47:53.675482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.961 
[2024-09-28 10:47:53.675497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:18.961 [2024-09-28 10:47:53.675509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:27:18.961 [2024-09-28 10:47:53.675519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.961 [2024-09-28 10:47:53.682171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.962 [2024-09-28 10:47:53.682213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:18.962 [2024-09-28 10:47:53.682226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.620 ms 00:27:18.962 [2024-09-28 10:47:53.682237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.962 [2024-09-28 10:47:53.682363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.962 [2024-09-28 10:47:53.682378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:27:18.962 [2024-09-28 10:47:53.682389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:18.962 [2024-09-28 10:47:53.682400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.962 [2024-09-28 10:47:53.687370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.962 [2024-09-28 10:47:53.687405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:27:18.962 [2024-09-28 10:47:53.687417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.925 ms 00:27:18.962 [2024-09-28 10:47:53.687424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.962 [2024-09-28 10:47:53.689677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.962 [2024-09-28 10:47:53.689710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:18.962 [2024-09-28 10:47:53.689726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.598 ms 00:27:18.962 [2024-09-28 10:47:53.689734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.962 [2024-09-28 10:47:53.704369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.962 [2024-09-28 10:47:53.704411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:18.962 [2024-09-28 10:47:53.704427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.600 ms 00:27:18.962 [2024-09-28 10:47:53.704435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.962 [2024-09-28 10:47:53.704550] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:27:18.962 [2024-09-28 10:47:53.704634] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:27:18.962 [2024-09-28 10:47:53.704711] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:27:18.962 [2024-09-28 10:47:53.704787] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:27:18.962 [2024-09-28 10:47:53.704796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.962 [2024-09-28 10:47:53.704805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:27:18.962 [2024-09-28 10:47:53.704813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.321 ms 00:27:18.962 [2024-09-28 10:47:53.704821] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.962 [2024-09-28 10:47:53.704858] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:27:18.962 [2024-09-28 10:47:53.704868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.962 [2024-09-28 10:47:53.704875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:27:18.962 [2024-09-28 10:47:53.704883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:27:18.962 [2024-09-28 10:47:53.704890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.962 [2024-09-28 10:47:53.708590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.962 [2024-09-28 10:47:53.708629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:27:18.962 [2024-09-28 10:47:53.708640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.679 ms 00:27:18.962 [2024-09-28 10:47:53.708654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.962 [2024-09-28 10:47:53.709274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.962 [2024-09-28 10:47:53.709301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:27:18.962 [2024-09-28 10:47:53.709312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:18.962 [2024-09-28 10:47:53.709321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.962 [2024-09-28 10:47:53.709392] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:27:18.962 [2024-09-28 10:47:53.709539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.962 [2024-09-28 10:47:53.709549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:27:18.962 [2024-09-28 10:47:53.709560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.148 ms 00:27:18.962 [2024-09-28 10:47:53.709575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:19.905 [2024-09-28 10:47:54.423816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:19.905 [2024-09-28 10:47:54.423906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:27:19.905 [2024-09-28 10:47:54.423925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 713.957 ms 00:27:19.905 [2024-09-28 10:47:54.423935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:19.905 [2024-09-28 10:47:54.425573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:19.905 [2024-09-28 10:47:54.425617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:27:19.905 [2024-09-28 10:47:54.425630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.039 ms 00:27:19.905 [2024-09-28 10:47:54.425648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:19.905 [2024-09-28 10:47:54.426271] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:27:19.905 [2024-09-28 10:47:54.426314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:19.905 [2024-09-28 10:47:54.426324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:27:19.905 [2024-09-28 10:47:54.426335] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl] duration: 0.626 ms 00:27:19.905 [2024-09-28 10:47:54.426356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:19.905 [2024-09-28 10:47:54.426397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:19.905 [2024-09-28 10:47:54.426419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:27:19.905 [2024-09-28 10:47:54.426429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:19.905 [2024-09-28 10:47:54.426441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:19.905 [2024-09-28 10:47:54.426478] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 717.086 ms, result 0 00:27:19.905 [2024-09-28 10:47:54.426522] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:27:19.905 [2024-09-28 10:47:54.426631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:19.905 [2024-09-28 10:47:54.426643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:27:19.905 [2024-09-28 10:47:54.426663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.110 ms 00:27:19.905 [2024-09-28 10:47:54.426671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.478 [2024-09-28 10:47:55.043197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.478 [2024-09-28 10:47:55.043250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:27:20.478 [2024-09-28 10:47:55.043263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 615.946 ms 00:27:20.478 [2024-09-28 10:47:55.043271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.478 [2024-09-28 10:47:55.044270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.478 [2024-09-28 10:47:55.044305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:27:20.478 [2024-09-28 10:47:55.044316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.653 ms 00:27:20.478 [2024-09-28 10:47:55.044324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.478 [2024-09-28 10:47:55.044802] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:27:20.478 [2024-09-28 10:47:55.044830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.478 [2024-09-28 10:47:55.044839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:27:20.478 [2024-09-28 10:47:55.044848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.473 ms 00:27:20.478 [2024-09-28 10:47:55.044856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.478 [2024-09-28 10:47:55.044882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.478 [2024-09-28 10:47:55.044891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:27:20.478 [2024-09-28 10:47:55.044899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:20.478 [2024-09-28 10:47:55.044906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.478 [2024-09-28 10:47:55.044942] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 618.418 ms, result 
0 00:27:20.478 [2024-09-28 10:47:55.045003] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:20.478 [2024-09-28 10:47:55.045016] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:20.478 [2024-09-28 10:47:55.045026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.478 [2024-09-28 10:47:55.045035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:27:20.478 [2024-09-28 10:47:55.045044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1335.652 ms 00:27:20.478 [2024-09-28 10:47:55.045051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.478 [2024-09-28 10:47:55.045087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.478 [2024-09-28 10:47:55.045095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:27:20.478 [2024-09-28 10:47:55.045103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:20.478 [2024-09-28 10:47:55.045111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.478 [2024-09-28 10:47:55.053168] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:20.478 [2024-09-28 10:47:55.053279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.478 [2024-09-28 10:47:55.053290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:20.478 [2024-09-28 10:47:55.053300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.153 ms 00:27:20.478 [2024-09-28 10:47:55.053310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.478 [2024-09-28 10:47:55.054000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.478 [2024-09-28 10:47:55.054017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:27:20.478 [2024-09-28 10:47:55.054026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.619 ms 00:27:20.478 [2024-09-28 10:47:55.054033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.478 [2024-09-28 10:47:55.056260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.478 [2024-09-28 10:47:55.056278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:27:20.478 [2024-09-28 10:47:55.056291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.210 ms 00:27:20.478 [2024-09-28 10:47:55.056299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.478 [2024-09-28 10:47:55.056336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.478 [2024-09-28 10:47:55.056344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:27:20.478 [2024-09-28 10:47:55.056352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:20.478 [2024-09-28 10:47:55.056359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.478 [2024-09-28 10:47:55.056464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.478 [2024-09-28 10:47:55.056474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:20.478 [2024-09-28 10:47:55.056482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:27:20.478 [2024-09-28 10:47:55.056489] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.478 [2024-09-28 10:47:55.056511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.478 [2024-09-28 10:47:55.056519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:20.478 [2024-09-28 10:47:55.056526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:20.478 [2024-09-28 10:47:55.056533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.478 [2024-09-28 10:47:55.056563] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:20.478 [2024-09-28 10:47:55.056573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.478 [2024-09-28 10:47:55.056580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:20.478 [2024-09-28 10:47:55.056588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:27:20.478 [2024-09-28 10:47:55.056595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.478 [2024-09-28 10:47:55.056649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.478 [2024-09-28 10:47:55.056658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:20.478 [2024-09-28 10:47:55.056670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:27:20.478 [2024-09-28 10:47:55.056677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.478 [2024-09-28 10:47:55.057595] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1422.449 ms, result 0 00:27:20.478 [2024-09-28 10:47:55.070196] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:20.478 [2024-09-28 10:47:55.086154] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:20.478 [2024-09-28 10:47:55.094285] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:21.047 10:47:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:21.047 10:47:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:27:21.047 10:47:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:21.047 10:47:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:27:21.047 10:47:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:27:21.047 10:47:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:21.047 10:47:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:21.047 10:47:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:21.047 Validate MD5 checksum, iteration 1 00:27:21.047 10:47:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:21.047 10:47:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:21.047 10:47:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:21.047 10:47:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:21.047 10:47:55 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:21.047 10:47:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:21.047 10:47:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:21.047 [2024-09-28 10:47:55.607232] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:27:21.047 [2024-09-28 10:47:55.607327] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93854 ] 00:27:21.047 [2024-09-28 10:47:55.731351] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:21.047 [2024-09-28 10:47:55.751998] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:21.047 [2024-09-28 10:47:55.792151] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:24.379  Copying: 654/1024 [MB] (654 MBps) Copying: 1024/1024 [MB] (average 643 MBps) 00:27:24.379 00:27:24.379 10:47:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:24.379 10:47:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:26.279 10:48:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:26.279 10:48:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=452b44314752377756ede2bb4d28edcd 00:27:26.279 10:48:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 452b44314752377756ede2bb4d28edcd != \4\5\2\b\4\4\3\1\4\7\5\2\3\7\7\7\5\6\e\d\e\2\b\b\4\d\2\8\e\d\c\d ]] 00:27:26.279 10:48:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:26.279 10:48:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:26.279 Validate MD5 checksum, iteration 2 00:27:26.279 10:48:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:26.279 10:48:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:26.279 10:48:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:26.279 10:48:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:26.279 10:48:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:26.279 10:48:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:26.280 10:48:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:26.280 [2024-09-28 10:48:00.744832] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 
24.11.0-rc0 initialization... 00:27:26.280 [2024-09-28 10:48:00.744973] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93910 ] 00:27:26.280 [2024-09-28 10:48:00.875037] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:26.280 [2024-09-28 10:48:00.892916] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:26.280 [2024-09-28 10:48:00.935426] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:28.863  Copying: 583/1024 [MB] (583 MBps) Copying: 1024/1024 [MB] (average 557 MBps) 00:27:28.863 00:27:28.863 10:48:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:27:28.863 10:48:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:31.405 10:48:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:31.405 10:48:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=1e48099ee2bd96e6ad3e83b44a336897 00:27:31.405 10:48:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 1e48099ee2bd96e6ad3e83b44a336897 != \1\e\4\8\0\9\9\e\e\2\b\d\9\6\e\6\a\d\3\e\8\3\b\4\4\a\3\3\6\8\9\7 ]] 00:27:31.405 10:48:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:31.405 10:48:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:31.405 10:48:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:27:31.405 10:48:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:27:31.405 10:48:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:27:31.405 10:48:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:31.405 10:48:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:27:31.405 10:48:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:27:31.405 10:48:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:27:31.405 10:48:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:27:31.405 10:48:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 93824 ]] 00:27:31.405 10:48:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 93824 00:27:31.405 10:48:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 93824 ']' 00:27:31.405 10:48:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 93824 00:27:31.405 10:48:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:27:31.405 10:48:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:31.405 10:48:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93824 00:27:31.405 killing process with pid 93824 00:27:31.405 10:48:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:31.405 10:48:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:31.405 10:48:05 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 93824' 00:27:31.405 10:48:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 93824 00:27:31.405 10:48:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 93824 00:27:31.405 [2024-09-28 10:48:06.004586] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:27:31.405 [2024-09-28 10:48:06.011393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.405 [2024-09-28 10:48:06.011444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:31.405 [2024-09-28 10:48:06.011459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:31.405 [2024-09-28 10:48:06.011467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.405 [2024-09-28 10:48:06.011492] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:31.405 [2024-09-28 10:48:06.012177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.405 [2024-09-28 10:48:06.012213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:31.405 [2024-09-28 10:48:06.012224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.671 ms 00:27:31.405 [2024-09-28 10:48:06.012238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.405 [2024-09-28 10:48:06.012497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.405 [2024-09-28 10:48:06.012510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:31.405 [2024-09-28 10:48:06.012519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.234 ms 00:27:31.405 [2024-09-28 10:48:06.012528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.405 [2024-09-28 10:48:06.014143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.405 [2024-09-28 10:48:06.014174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:31.405 [2024-09-28 10:48:06.014184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.598 ms 00:27:31.405 [2024-09-28 10:48:06.014192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.405 [2024-09-28 10:48:06.015455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.405 [2024-09-28 10:48:06.015490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:27:31.405 [2024-09-28 10:48:06.015500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.229 ms 00:27:31.405 [2024-09-28 10:48:06.015508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.405 [2024-09-28 10:48:06.018237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.405 [2024-09-28 10:48:06.018278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:31.405 [2024-09-28 10:48:06.018289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.671 ms 00:27:31.405 [2024-09-28 10:48:06.018297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.405 [2024-09-28 10:48:06.020150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.405 [2024-09-28 10:48:06.020192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:31.405 [2024-09-28 10:48:06.020211] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl] duration: 1.803 ms 00:27:31.405 [2024-09-28 10:48:06.020222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.405 [2024-09-28 10:48:06.020322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.405 [2024-09-28 10:48:06.020334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:31.405 [2024-09-28 10:48:06.020344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.059 ms 00:27:31.405 [2024-09-28 10:48:06.020353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.405 [2024-09-28 10:48:06.022079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.405 [2024-09-28 10:48:06.022117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:27:31.405 [2024-09-28 10:48:06.022127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.708 ms 00:27:31.405 [2024-09-28 10:48:06.022136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.405 [2024-09-28 10:48:06.023726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.405 [2024-09-28 10:48:06.023764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:27:31.405 [2024-09-28 10:48:06.023773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.550 ms 00:27:31.405 [2024-09-28 10:48:06.023781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.405 [2024-09-28 10:48:06.025263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.405 [2024-09-28 10:48:06.025298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:31.405 [2024-09-28 10:48:06.025307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.444 ms 00:27:31.405 [2024-09-28 10:48:06.025315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.405 [2024-09-28 10:48:06.026650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.405 [2024-09-28 10:48:06.026688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:31.405 [2024-09-28 10:48:06.026697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.264 ms 00:27:31.405 [2024-09-28 10:48:06.026705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.405 [2024-09-28 10:48:06.026741] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:31.405 [2024-09-28 10:48:06.026761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:31.405 [2024-09-28 10:48:06.026772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:31.405 [2024-09-28 10:48:06.026781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:31.405 [2024-09-28 10:48:06.026790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:31.405 [2024-09-28 10:48:06.026798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:31.405 [2024-09-28 10:48:06.026806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:31.405 [2024-09-28 10:48:06.026817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:31.405 [2024-09-28 10:48:06.026825] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:31.405 [2024-09-28 10:48:06.026833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:31.405 [2024-09-28 10:48:06.026842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:31.405 [2024-09-28 10:48:06.026850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:31.405 [2024-09-28 10:48:06.026858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:31.405 [2024-09-28 10:48:06.026867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:31.405 [2024-09-28 10:48:06.026874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:31.406 [2024-09-28 10:48:06.026882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:31.406 [2024-09-28 10:48:06.026890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:31.406 [2024-09-28 10:48:06.026898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:31.406 [2024-09-28 10:48:06.026905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:31.406 [2024-09-28 10:48:06.026914] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:31.406 [2024-09-28 10:48:06.026922] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 129af541-3381-49a7-a48a-35d23a66a43f 00:27:31.406 [2024-09-28 10:48:06.026930] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:31.406 [2024-09-28 10:48:06.026937] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:27:31.406 [2024-09-28 10:48:06.026944] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:27:31.406 [2024-09-28 10:48:06.026952] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:27:31.406 [2024-09-28 10:48:06.026976] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:31.406 [2024-09-28 10:48:06.026986] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:31.406 [2024-09-28 10:48:06.026995] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:31.406 [2024-09-28 10:48:06.027002] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:31.406 [2024-09-28 10:48:06.027010] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:31.406 [2024-09-28 10:48:06.027018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.406 [2024-09-28 10:48:06.027028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:31.406 [2024-09-28 10:48:06.027039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.279 ms 00:27:31.406 [2024-09-28 10:48:06.027048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.406 [2024-09-28 10:48:06.029299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.406 [2024-09-28 10:48:06.029334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:31.406 [2024-09-28 10:48:06.029343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.232 ms 
00:27:31.406 [2024-09-28 10:48:06.029352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.406 [2024-09-28 10:48:06.029468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.406 [2024-09-28 10:48:06.029483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:31.406 [2024-09-28 10:48:06.029492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.094 ms 00:27:31.406 [2024-09-28 10:48:06.029500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.406 [2024-09-28 10:48:06.037877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:31.406 [2024-09-28 10:48:06.037917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:31.406 [2024-09-28 10:48:06.037927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:31.406 [2024-09-28 10:48:06.037935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.406 [2024-09-28 10:48:06.038022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:31.406 [2024-09-28 10:48:06.038038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:31.406 [2024-09-28 10:48:06.038047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:31.406 [2024-09-28 10:48:06.038056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.406 [2024-09-28 10:48:06.038140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:31.406 [2024-09-28 10:48:06.038152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:31.406 [2024-09-28 10:48:06.038161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:31.406 [2024-09-28 10:48:06.038169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.406 [2024-09-28 10:48:06.038188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:31.406 [2024-09-28 10:48:06.038198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:31.406 [2024-09-28 10:48:06.038209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:31.406 [2024-09-28 10:48:06.038217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.406 [2024-09-28 10:48:06.053372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:31.406 [2024-09-28 10:48:06.053424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:31.406 [2024-09-28 10:48:06.053435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:31.406 [2024-09-28 10:48:06.053445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.406 [2024-09-28 10:48:06.065156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:31.406 [2024-09-28 10:48:06.065207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:31.406 [2024-09-28 10:48:06.065224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:31.406 [2024-09-28 10:48:06.065231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.406 [2024-09-28 10:48:06.065298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:31.406 [2024-09-28 10:48:06.065307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:31.406 [2024-09-28 10:48:06.065314] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:31.406 [2024-09-28 10:48:06.065325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.406 [2024-09-28 10:48:06.065368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:31.406 [2024-09-28 10:48:06.065378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:31.406 [2024-09-28 10:48:06.065388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:31.406 [2024-09-28 10:48:06.065397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.406 [2024-09-28 10:48:06.065462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:31.406 [2024-09-28 10:48:06.065473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:31.406 [2024-09-28 10:48:06.065480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:31.406 [2024-09-28 10:48:06.065487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.406 [2024-09-28 10:48:06.065516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:31.406 [2024-09-28 10:48:06.065528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:31.406 [2024-09-28 10:48:06.065536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:31.406 [2024-09-28 10:48:06.065542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.406 [2024-09-28 10:48:06.065584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:31.406 [2024-09-28 10:48:06.065592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:31.406 [2024-09-28 10:48:06.065599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:31.406 [2024-09-28 10:48:06.065607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.406 [2024-09-28 10:48:06.065651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:31.406 [2024-09-28 10:48:06.065675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:31.406 [2024-09-28 10:48:06.065683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:31.406 [2024-09-28 10:48:06.065694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.406 [2024-09-28 10:48:06.065828] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 54.397 ms, result 0 00:27:31.667 10:48:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:31.667 10:48:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:31.667 10:48:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:27:31.667 10:48:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:27:31.667 10:48:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:27:31.667 10:48:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:31.667 Remove shared memory files 00:27:31.667 10:48:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:27:31.667 10:48:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:31.667 10:48:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:27:31.667 10:48:06 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:27:31.667 10:48:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid93648 00:27:31.667 10:48:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:31.667 10:48:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:27:31.667 00:27:31.667 real 1m11.002s 00:27:31.667 user 1m34.585s 00:27:31.667 sys 0m20.146s 00:27:31.667 10:48:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:31.667 ************************************ 00:27:31.667 10:48:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:31.667 END TEST ftl_upgrade_shutdown 00:27:31.667 ************************************ 00:27:31.667 10:48:06 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:27:31.668 10:48:06 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:27:31.668 10:48:06 ftl -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:27:31.668 10:48:06 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:31.668 10:48:06 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:31.668 ************************************ 00:27:31.668 START TEST ftl_restore_fast 00:27:31.668 ************************************ 00:27:31.668 10:48:06 ftl.ftl_restore_fast -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:27:31.668 * Looking for test storage... 00:27:31.668 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:31.668 10:48:06 ftl.ftl_restore_fast -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:31.929 10:48:06 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # lcov --version 00:27:31.929 10:48:06 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:31.929 10:48:06 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:31.929 10:48:06 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:31.929 10:48:06 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:31.929 10:48:06 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:31.929 10:48:06 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:27:31.929 10:48:06 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:27:31.929 10:48:06 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:27:31.929 10:48:06 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:27:31.929 10:48:06 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:27:31.929 10:48:06 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:27:31.929 10:48:06 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:27:31.929 10:48:06 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:31.929 10:48:06 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:27:31.929 10:48:06 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:27:31.929 10:48:06 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:31.929 10:48:06 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:31.929 10:48:06 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:27:31.929 10:48:06 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:27:31.929 10:48:06 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:31.929 10:48:06 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:27:31.929 10:48:06 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:27:31.929 10:48:06 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:27:31.929 10:48:06 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:27:31.929 10:48:06 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:31.929 10:48:06 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:27:31.929 10:48:06 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:27:31.929 10:48:06 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:31.929 10:48:06 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:31.929 10:48:06 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:27:31.929 10:48:06 ftl.ftl_restore_fast -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:31.929 10:48:06 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:31.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.929 --rc genhtml_branch_coverage=1 00:27:31.929 --rc genhtml_function_coverage=1 00:27:31.929 --rc genhtml_legend=1 00:27:31.929 --rc geninfo_all_blocks=1 00:27:31.929 --rc geninfo_unexecuted_blocks=1 00:27:31.929 00:27:31.929 ' 00:27:31.929 10:48:06 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:31.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.929 --rc genhtml_branch_coverage=1 00:27:31.929 --rc genhtml_function_coverage=1 00:27:31.930 --rc genhtml_legend=1 00:27:31.930 --rc geninfo_all_blocks=1 00:27:31.930 --rc geninfo_unexecuted_blocks=1 00:27:31.930 00:27:31.930 ' 00:27:31.930 10:48:06 ftl.ftl_restore_fast -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:31.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.930 --rc genhtml_branch_coverage=1 00:27:31.930 --rc genhtml_function_coverage=1 00:27:31.930 --rc genhtml_legend=1 00:27:31.930 --rc geninfo_all_blocks=1 00:27:31.930 --rc geninfo_unexecuted_blocks=1 00:27:31.930 00:27:31.930 ' 00:27:31.930 10:48:06 ftl.ftl_restore_fast -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:31.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.930 --rc genhtml_branch_coverage=1 00:27:31.930 --rc genhtml_function_coverage=1 00:27:31.930 --rc genhtml_legend=1 00:27:31.930 --rc geninfo_all_blocks=1 00:27:31.930 --rc geninfo_unexecuted_blocks=1 00:27:31.930 00:27:31.930 ' 00:27:31.930 10:48:06 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:31.930 10:48:06 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:27:31.930 10:48:06 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:31.930 10:48:06 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:31.930 10:48:06 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
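The checksum pass that closed out the ftl_upgrade_shutdown run above reduces to a short read-and-compare loop. A simplified sketch, assuming the expected per-window sums were captured in an md5 array before the fast shutdown and that tcp_dd wraps spdk_dd with the NVMe/TCP initiator config from ftl/common.sh (this paraphrases the trace, not the exact upgrade_shutdown.sh source; file paths are abbreviated):

# Sketch of the validate-checksum pass traced above: read the FTL bdev back
# over NVMe/TCP in 1024 MiB windows and compare the MD5 sum of each window.
skip=0
for ((i = 0; i < iterations; i++)); do
    echo "Validate MD5 checksum, iteration $((i + 1))"
    tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
    skip=$((skip + 1024))
    sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
    # Abort if the sum read back after restart differs from the pre-shutdown value.
    [[ $sum == "${md5[i]}" ]] || exit 1
done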
00:27:31.930 10:48:06 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:31.930 10:48:06 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:31.930 10:48:06 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:31.930 10:48:06 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:31.930 10:48:06 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:31.930 10:48:06 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:31.930 10:48:06 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:31.930 10:48:06 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:31.930 10:48:06 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:31.930 10:48:06 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:31.930 10:48:06 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:31.930 10:48:06 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:31.930 10:48:06 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:31.930 10:48:06 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:31.930 10:48:06 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:31.930 10:48:06 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:31.930 10:48:06 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:31.930 10:48:06 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:31.930 10:48:06 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:31.930 10:48:06 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:31.930 10:48:06 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:31.930 10:48:06 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:31.930 10:48:06 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:31.930 10:48:06 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:31.930 10:48:06 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:31.930 10:48:06 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:27:31.930 10:48:06 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.ayRy4bYzH6 00:27:31.930 10:48:06 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:27:31.930 10:48:06 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:27:31.930 10:48:06 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:27:31.930 10:48:06 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:27:31.930 10:48:06 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:27:31.930 10:48:06 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:27:31.930 10:48:06 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:27:31.930 10:48:06 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:27:31.930 10:48:06 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:27:31.930 10:48:06 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:27:31.930 10:48:06 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:27:31.930 10:48:06 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=94056 00:27:31.930 10:48:06 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 94056 00:27:31.930 10:48:06 ftl.ftl_restore_fast -- common/autotest_common.sh@831 -- # '[' -z 94056 ']' 00:27:31.930 10:48:06 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:31.930 10:48:06 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:31.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:31.930 10:48:06 ftl.ftl_restore_fast -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:31.930 10:48:06 ftl.ftl_restore_fast -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:31.930 10:48:06 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:31.930 10:48:06 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:27:31.930 [2024-09-28 10:48:06.604615] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:27:31.930 [2024-09-28 10:48:06.604729] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94056 ] 00:27:32.191 [2024-09-28 10:48:06.733521] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:27:32.191 [2024-09-28 10:48:06.754941] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:32.191 [2024-09-28 10:48:06.788697] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:32.763 10:48:07 ftl.ftl_restore_fast -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:32.763 10:48:07 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # return 0 00:27:32.763 10:48:07 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:27:32.763 10:48:07 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:27:32.763 10:48:07 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:32.763 10:48:07 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:27:32.763 10:48:07 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:27:32.763 10:48:07 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:27:33.024 10:48:07 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:27:33.024 10:48:07 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:27:33.024 10:48:07 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:27:33.024 10:48:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:27:33.024 10:48:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:33.024 10:48:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:27:33.024 10:48:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:27:33.024 10:48:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:27:33.285 10:48:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:33.285 { 00:27:33.285 "name": "nvme0n1", 00:27:33.285 "aliases": [ 00:27:33.285 "e5f3b3f5-c4a6-4d13-92a5-6dcee7d66f34" 00:27:33.285 ], 00:27:33.285 "product_name": "NVMe disk", 00:27:33.285 "block_size": 4096, 00:27:33.285 "num_blocks": 1310720, 00:27:33.285 "uuid": "e5f3b3f5-c4a6-4d13-92a5-6dcee7d66f34", 00:27:33.285 "numa_id": -1, 00:27:33.285 "assigned_rate_limits": { 00:27:33.285 "rw_ios_per_sec": 0, 00:27:33.285 "rw_mbytes_per_sec": 0, 00:27:33.285 "r_mbytes_per_sec": 0, 00:27:33.285 "w_mbytes_per_sec": 0 00:27:33.285 }, 00:27:33.285 "claimed": true, 00:27:33.285 "claim_type": "read_many_write_one", 00:27:33.285 "zoned": false, 00:27:33.285 "supported_io_types": { 00:27:33.285 "read": true, 00:27:33.285 "write": true, 00:27:33.285 "unmap": true, 00:27:33.285 "flush": true, 00:27:33.285 "reset": true, 00:27:33.285 "nvme_admin": true, 00:27:33.285 "nvme_io": true, 00:27:33.285 "nvme_io_md": false, 00:27:33.285 "write_zeroes": true, 00:27:33.285 "zcopy": false, 00:27:33.285 "get_zone_info": false, 00:27:33.285 "zone_management": false, 00:27:33.285 "zone_append": false, 00:27:33.285 "compare": true, 00:27:33.285 "compare_and_write": false, 00:27:33.285 "abort": true, 00:27:33.285 "seek_hole": false, 00:27:33.285 "seek_data": false, 00:27:33.285 "copy": true, 00:27:33.285 "nvme_iov_md": false 00:27:33.285 }, 00:27:33.285 "driver_specific": { 00:27:33.285 "nvme": [ 00:27:33.285 { 00:27:33.285 "pci_address": "0000:00:11.0", 00:27:33.285 "trid": { 00:27:33.285 "trtype": "PCIe", 00:27:33.285 "traddr": "0000:00:11.0" 00:27:33.285 }, 00:27:33.285 "ctrlr_data": { 00:27:33.285 "cntlid": 0, 00:27:33.285 
"vendor_id": "0x1b36", 00:27:33.285 "model_number": "QEMU NVMe Ctrl", 00:27:33.285 "serial_number": "12341", 00:27:33.285 "firmware_revision": "8.0.0", 00:27:33.285 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:33.285 "oacs": { 00:27:33.285 "security": 0, 00:27:33.285 "format": 1, 00:27:33.285 "firmware": 0, 00:27:33.285 "ns_manage": 1 00:27:33.285 }, 00:27:33.285 "multi_ctrlr": false, 00:27:33.285 "ana_reporting": false 00:27:33.285 }, 00:27:33.285 "vs": { 00:27:33.285 "nvme_version": "1.4" 00:27:33.285 }, 00:27:33.285 "ns_data": { 00:27:33.285 "id": 1, 00:27:33.285 "can_share": false 00:27:33.285 } 00:27:33.285 } 00:27:33.285 ], 00:27:33.285 "mp_policy": "active_passive" 00:27:33.285 } 00:27:33.285 } 00:27:33.285 ]' 00:27:33.285 10:48:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:33.285 10:48:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:27:33.285 10:48:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:33.285 10:48:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=1310720 00:27:33.285 10:48:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:27:33.285 10:48:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 5120 00:27:33.285 10:48:07 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:27:33.285 10:48:07 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:27:33.285 10:48:07 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:27:33.285 10:48:07 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:33.285 10:48:07 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:33.547 10:48:08 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=0f0f172f-9f11-401b-8404-80c3aaa94c5f 00:27:33.547 10:48:08 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:27:33.547 10:48:08 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0f0f172f-9f11-401b-8404-80c3aaa94c5f 00:27:33.808 10:48:08 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:27:34.068 10:48:08 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=2b2f5faa-1a6a-4110-a9e4-758cc14f6420 00:27:34.068 10:48:08 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 2b2f5faa-1a6a-4110-a9e4-758cc14f6420 00:27:34.068 10:48:08 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=98f1cfbd-5b7d-448a-9280-43db2ad78c5b 00:27:34.068 10:48:08 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:27:34.068 10:48:08 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 98f1cfbd-5b7d-448a-9280-43db2ad78c5b 00:27:34.068 10:48:08 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:27:34.068 10:48:08 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:27:34.068 10:48:08 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=98f1cfbd-5b7d-448a-9280-43db2ad78c5b 00:27:34.068 10:48:08 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:27:34.068 10:48:08 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 98f1cfbd-5b7d-448a-9280-43db2ad78c5b 00:27:34.068 10:48:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local 
bdev_name=98f1cfbd-5b7d-448a-9280-43db2ad78c5b 00:27:34.068 10:48:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:34.068 10:48:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:27:34.068 10:48:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:27:34.068 10:48:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 98f1cfbd-5b7d-448a-9280-43db2ad78c5b 00:27:34.327 10:48:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:34.327 { 00:27:34.327 "name": "98f1cfbd-5b7d-448a-9280-43db2ad78c5b", 00:27:34.327 "aliases": [ 00:27:34.327 "lvs/nvme0n1p0" 00:27:34.327 ], 00:27:34.327 "product_name": "Logical Volume", 00:27:34.327 "block_size": 4096, 00:27:34.327 "num_blocks": 26476544, 00:27:34.327 "uuid": "98f1cfbd-5b7d-448a-9280-43db2ad78c5b", 00:27:34.327 "assigned_rate_limits": { 00:27:34.327 "rw_ios_per_sec": 0, 00:27:34.327 "rw_mbytes_per_sec": 0, 00:27:34.327 "r_mbytes_per_sec": 0, 00:27:34.327 "w_mbytes_per_sec": 0 00:27:34.327 }, 00:27:34.327 "claimed": false, 00:27:34.327 "zoned": false, 00:27:34.327 "supported_io_types": { 00:27:34.327 "read": true, 00:27:34.327 "write": true, 00:27:34.327 "unmap": true, 00:27:34.327 "flush": false, 00:27:34.327 "reset": true, 00:27:34.327 "nvme_admin": false, 00:27:34.327 "nvme_io": false, 00:27:34.327 "nvme_io_md": false, 00:27:34.327 "write_zeroes": true, 00:27:34.327 "zcopy": false, 00:27:34.327 "get_zone_info": false, 00:27:34.327 "zone_management": false, 00:27:34.327 "zone_append": false, 00:27:34.327 "compare": false, 00:27:34.327 "compare_and_write": false, 00:27:34.327 "abort": false, 00:27:34.327 "seek_hole": true, 00:27:34.327 "seek_data": true, 00:27:34.327 "copy": false, 00:27:34.327 "nvme_iov_md": false 00:27:34.327 }, 00:27:34.327 "driver_specific": { 00:27:34.327 "lvol": { 00:27:34.327 "lvol_store_uuid": "2b2f5faa-1a6a-4110-a9e4-758cc14f6420", 00:27:34.327 "base_bdev": "nvme0n1", 00:27:34.327 "thin_provision": true, 00:27:34.327 "num_allocated_clusters": 0, 00:27:34.328 "snapshot": false, 00:27:34.328 "clone": false, 00:27:34.328 "esnap_clone": false 00:27:34.328 } 00:27:34.328 } 00:27:34.328 } 00:27:34.328 ]' 00:27:34.328 10:48:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:34.328 10:48:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:27:34.328 10:48:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:34.328 10:48:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:27:34.328 10:48:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:27:34.328 10:48:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:27:34.328 10:48:09 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:27:34.328 10:48:09 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:27:34.328 10:48:09 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:27:34.586 10:48:09 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:27:34.586 10:48:09 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:27:34.586 10:48:09 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 98f1cfbd-5b7d-448a-9280-43db2ad78c5b 00:27:34.586 10:48:09 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1378 -- # local bdev_name=98f1cfbd-5b7d-448a-9280-43db2ad78c5b 00:27:34.586 10:48:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:34.586 10:48:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:27:34.586 10:48:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:27:34.586 10:48:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 98f1cfbd-5b7d-448a-9280-43db2ad78c5b 00:27:34.845 10:48:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:34.845 { 00:27:34.845 "name": "98f1cfbd-5b7d-448a-9280-43db2ad78c5b", 00:27:34.845 "aliases": [ 00:27:34.845 "lvs/nvme0n1p0" 00:27:34.845 ], 00:27:34.845 "product_name": "Logical Volume", 00:27:34.845 "block_size": 4096, 00:27:34.845 "num_blocks": 26476544, 00:27:34.845 "uuid": "98f1cfbd-5b7d-448a-9280-43db2ad78c5b", 00:27:34.845 "assigned_rate_limits": { 00:27:34.845 "rw_ios_per_sec": 0, 00:27:34.845 "rw_mbytes_per_sec": 0, 00:27:34.845 "r_mbytes_per_sec": 0, 00:27:34.845 "w_mbytes_per_sec": 0 00:27:34.845 }, 00:27:34.845 "claimed": false, 00:27:34.845 "zoned": false, 00:27:34.845 "supported_io_types": { 00:27:34.845 "read": true, 00:27:34.845 "write": true, 00:27:34.845 "unmap": true, 00:27:34.845 "flush": false, 00:27:34.845 "reset": true, 00:27:34.845 "nvme_admin": false, 00:27:34.845 "nvme_io": false, 00:27:34.845 "nvme_io_md": false, 00:27:34.845 "write_zeroes": true, 00:27:34.845 "zcopy": false, 00:27:34.845 "get_zone_info": false, 00:27:34.845 "zone_management": false, 00:27:34.845 "zone_append": false, 00:27:34.845 "compare": false, 00:27:34.845 "compare_and_write": false, 00:27:34.845 "abort": false, 00:27:34.845 "seek_hole": true, 00:27:34.845 "seek_data": true, 00:27:34.845 "copy": false, 00:27:34.845 "nvme_iov_md": false 00:27:34.845 }, 00:27:34.845 "driver_specific": { 00:27:34.845 "lvol": { 00:27:34.845 "lvol_store_uuid": "2b2f5faa-1a6a-4110-a9e4-758cc14f6420", 00:27:34.845 "base_bdev": "nvme0n1", 00:27:34.845 "thin_provision": true, 00:27:34.845 "num_allocated_clusters": 0, 00:27:34.845 "snapshot": false, 00:27:34.845 "clone": false, 00:27:34.845 "esnap_clone": false 00:27:34.845 } 00:27:34.845 } 00:27:34.845 } 00:27:34.845 ]' 00:27:34.845 10:48:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:34.845 10:48:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:27:34.845 10:48:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:34.845 10:48:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:27:34.845 10:48:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:27:34.845 10:48:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:27:34.845 10:48:09 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:27:34.845 10:48:09 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:27:35.104 10:48:09 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:27:35.104 10:48:09 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 98f1cfbd-5b7d-448a-9280-43db2ad78c5b 00:27:35.104 10:48:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=98f1cfbd-5b7d-448a-9280-43db2ad78c5b 00:27:35.104 10:48:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 
-- # local bdev_info 00:27:35.104 10:48:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:27:35.104 10:48:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:27:35.104 10:48:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 98f1cfbd-5b7d-448a-9280-43db2ad78c5b 00:27:35.363 10:48:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:35.363 { 00:27:35.363 "name": "98f1cfbd-5b7d-448a-9280-43db2ad78c5b", 00:27:35.363 "aliases": [ 00:27:35.363 "lvs/nvme0n1p0" 00:27:35.363 ], 00:27:35.363 "product_name": "Logical Volume", 00:27:35.363 "block_size": 4096, 00:27:35.363 "num_blocks": 26476544, 00:27:35.363 "uuid": "98f1cfbd-5b7d-448a-9280-43db2ad78c5b", 00:27:35.363 "assigned_rate_limits": { 00:27:35.363 "rw_ios_per_sec": 0, 00:27:35.363 "rw_mbytes_per_sec": 0, 00:27:35.363 "r_mbytes_per_sec": 0, 00:27:35.363 "w_mbytes_per_sec": 0 00:27:35.363 }, 00:27:35.363 "claimed": false, 00:27:35.363 "zoned": false, 00:27:35.363 "supported_io_types": { 00:27:35.363 "read": true, 00:27:35.363 "write": true, 00:27:35.363 "unmap": true, 00:27:35.363 "flush": false, 00:27:35.363 "reset": true, 00:27:35.363 "nvme_admin": false, 00:27:35.363 "nvme_io": false, 00:27:35.363 "nvme_io_md": false, 00:27:35.363 "write_zeroes": true, 00:27:35.363 "zcopy": false, 00:27:35.363 "get_zone_info": false, 00:27:35.363 "zone_management": false, 00:27:35.363 "zone_append": false, 00:27:35.363 "compare": false, 00:27:35.363 "compare_and_write": false, 00:27:35.363 "abort": false, 00:27:35.363 "seek_hole": true, 00:27:35.363 "seek_data": true, 00:27:35.363 "copy": false, 00:27:35.363 "nvme_iov_md": false 00:27:35.363 }, 00:27:35.363 "driver_specific": { 00:27:35.363 "lvol": { 00:27:35.363 "lvol_store_uuid": "2b2f5faa-1a6a-4110-a9e4-758cc14f6420", 00:27:35.363 "base_bdev": "nvme0n1", 00:27:35.363 "thin_provision": true, 00:27:35.363 "num_allocated_clusters": 0, 00:27:35.363 "snapshot": false, 00:27:35.363 "clone": false, 00:27:35.363 "esnap_clone": false 00:27:35.363 } 00:27:35.363 } 00:27:35.363 } 00:27:35.363 ]' 00:27:35.363 10:48:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:35.363 10:48:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:27:35.363 10:48:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:35.363 10:48:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:27:35.363 10:48:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:27:35.363 10:48:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:27:35.363 10:48:09 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:27:35.363 10:48:09 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 98f1cfbd-5b7d-448a-9280-43db2ad78c5b --l2p_dram_limit 10' 00:27:35.363 10:48:09 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:27:35.363 10:48:09 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:27:35.363 10:48:09 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:27:35.363 10:48:09 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:27:35.363 10:48:09 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:27:35.363 10:48:09 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 98f1cfbd-5b7d-448a-9280-43db2ad78c5b --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:27:35.363 [2024-09-28 10:48:10.129465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.363 [2024-09-28 10:48:10.129499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:35.363 [2024-09-28 10:48:10.129511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:35.363 [2024-09-28 10:48:10.129517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.363 [2024-09-28 10:48:10.129560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.363 [2024-09-28 10:48:10.129568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:35.363 [2024-09-28 10:48:10.129578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:27:35.363 [2024-09-28 10:48:10.129587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.363 [2024-09-28 10:48:10.129606] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:35.363 [2024-09-28 10:48:10.129801] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:35.363 [2024-09-28 10:48:10.129815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.363 [2024-09-28 10:48:10.129823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:35.363 [2024-09-28 10:48:10.129835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.215 ms 00:27:35.363 [2024-09-28 10:48:10.129841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.363 [2024-09-28 10:48:10.129869] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 8a691307-d087-4481-8af4-9c5dad4525aa 00:27:35.363 [2024-09-28 10:48:10.130839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.363 [2024-09-28 10:48:10.130859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:27:35.363 [2024-09-28 10:48:10.130867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:27:35.363 [2024-09-28 10:48:10.130877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.363 [2024-09-28 10:48:10.135807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.363 [2024-09-28 10:48:10.135833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:35.363 [2024-09-28 10:48:10.135841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.888 ms 00:27:35.363 [2024-09-28 10:48:10.135852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.363 [2024-09-28 10:48:10.135953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.363 [2024-09-28 10:48:10.135972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:35.363 [2024-09-28 10:48:10.135985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:27:35.363 [2024-09-28 10:48:10.135993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.363 [2024-09-28 10:48:10.136029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.363 [2024-09-28 10:48:10.136038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO 
device 00:27:35.363 [2024-09-28 10:48:10.136045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:35.363 [2024-09-28 10:48:10.136052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.363 [2024-09-28 10:48:10.136070] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:35.622 [2024-09-28 10:48:10.137365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.622 [2024-09-28 10:48:10.137391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:35.622 [2024-09-28 10:48:10.137401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.299 ms 00:27:35.622 [2024-09-28 10:48:10.137407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.622 [2024-09-28 10:48:10.137433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.622 [2024-09-28 10:48:10.137440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:35.622 [2024-09-28 10:48:10.137452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:35.622 [2024-09-28 10:48:10.137458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.622 [2024-09-28 10:48:10.137472] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:27:35.622 [2024-09-28 10:48:10.137576] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:35.622 [2024-09-28 10:48:10.137586] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:35.622 [2024-09-28 10:48:10.137595] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:35.622 [2024-09-28 10:48:10.137604] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:35.622 [2024-09-28 10:48:10.137613] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:35.622 [2024-09-28 10:48:10.137630] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:35.622 [2024-09-28 10:48:10.137635] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:35.622 [2024-09-28 10:48:10.137643] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:35.622 [2024-09-28 10:48:10.137651] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:35.622 [2024-09-28 10:48:10.137658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.623 [2024-09-28 10:48:10.137664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:35.623 [2024-09-28 10:48:10.137671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.187 ms 00:27:35.623 [2024-09-28 10:48:10.137677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.623 [2024-09-28 10:48:10.137743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.623 [2024-09-28 10:48:10.137749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:35.623 [2024-09-28 10:48:10.137756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:27:35.623 [2024-09-28 10:48:10.137762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.623 
[2024-09-28 10:48:10.137837] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:35.623 [2024-09-28 10:48:10.137845] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:35.623 [2024-09-28 10:48:10.137855] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:35.623 [2024-09-28 10:48:10.137861] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:35.623 [2024-09-28 10:48:10.137869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:35.623 [2024-09-28 10:48:10.137875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:35.623 [2024-09-28 10:48:10.137881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:35.623 [2024-09-28 10:48:10.137887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:35.623 [2024-09-28 10:48:10.137895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:35.623 [2024-09-28 10:48:10.137900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:35.623 [2024-09-28 10:48:10.137908] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:35.623 [2024-09-28 10:48:10.137913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:35.623 [2024-09-28 10:48:10.137921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:35.623 [2024-09-28 10:48:10.137926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:35.623 [2024-09-28 10:48:10.137933] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:35.623 [2024-09-28 10:48:10.137939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:35.623 [2024-09-28 10:48:10.137946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:35.623 [2024-09-28 10:48:10.137951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:35.623 [2024-09-28 10:48:10.137957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:35.623 [2024-09-28 10:48:10.137978] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:35.623 [2024-09-28 10:48:10.137985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:35.623 [2024-09-28 10:48:10.137990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:35.623 [2024-09-28 10:48:10.137997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:35.623 [2024-09-28 10:48:10.138002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:35.623 [2024-09-28 10:48:10.138008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:35.623 [2024-09-28 10:48:10.138014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:35.623 [2024-09-28 10:48:10.138021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:35.623 [2024-09-28 10:48:10.138027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:35.623 [2024-09-28 10:48:10.138036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:35.623 [2024-09-28 10:48:10.138042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:35.623 [2024-09-28 10:48:10.138049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:35.623 [2024-09-28 10:48:10.138055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md 00:27:35.623 [2024-09-28 10:48:10.138062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:35.623 [2024-09-28 10:48:10.138068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:35.623 [2024-09-28 10:48:10.138077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:35.623 [2024-09-28 10:48:10.138082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:35.623 [2024-09-28 10:48:10.138089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:35.623 [2024-09-28 10:48:10.138096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:35.623 [2024-09-28 10:48:10.138104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:35.623 [2024-09-28 10:48:10.138110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:35.623 [2024-09-28 10:48:10.138117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:35.623 [2024-09-28 10:48:10.138122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:35.623 [2024-09-28 10:48:10.138130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:35.623 [2024-09-28 10:48:10.138136] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:35.623 [2024-09-28 10:48:10.138145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:35.623 [2024-09-28 10:48:10.138150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:35.623 [2024-09-28 10:48:10.138157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:35.623 [2024-09-28 10:48:10.138163] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:35.623 [2024-09-28 10:48:10.138169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:35.623 [2024-09-28 10:48:10.138174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:35.623 [2024-09-28 10:48:10.138180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:35.623 [2024-09-28 10:48:10.138186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:35.623 [2024-09-28 10:48:10.138192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:35.623 [2024-09-28 10:48:10.138200] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:35.623 [2024-09-28 10:48:10.138209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:35.623 [2024-09-28 10:48:10.138216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:35.623 [2024-09-28 10:48:10.138223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:35.623 [2024-09-28 10:48:10.138229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:35.623 [2024-09-28 10:48:10.138236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:35.623 [2024-09-28 10:48:10.138241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 
blk_offs:0x5920 blk_sz:0x800 00:27:35.623 [2024-09-28 10:48:10.138249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:35.623 [2024-09-28 10:48:10.138255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:35.623 [2024-09-28 10:48:10.138263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:35.623 [2024-09-28 10:48:10.138268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:35.623 [2024-09-28 10:48:10.138275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:35.623 [2024-09-28 10:48:10.138280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:35.623 [2024-09-28 10:48:10.138286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:35.623 [2024-09-28 10:48:10.138291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:35.623 [2024-09-28 10:48:10.138298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:35.623 [2024-09-28 10:48:10.138304] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:35.623 [2024-09-28 10:48:10.138312] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:35.623 [2024-09-28 10:48:10.138317] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:35.623 [2024-09-28 10:48:10.138324] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:35.623 [2024-09-28 10:48:10.138329] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:35.623 [2024-09-28 10:48:10.138341] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:35.623 [2024-09-28 10:48:10.138346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.623 [2024-09-28 10:48:10.138354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:35.623 [2024-09-28 10:48:10.138359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.561 ms 00:27:35.623 [2024-09-28 10:48:10.138366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.623 [2024-09-28 10:48:10.138404] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
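The startup trace above repeats the same trace_step notices for every FTL management step: an Action marker, the step name, its duration in milliseconds, and a status code. A small helper along the following lines (illustrative only, not part of the SPDK test scripts; it assumes nothing beyond the 'name:' / 'duration:' wording visible in this run) could total the time spent per step from such a log:

#!/usr/bin/env python3
"""Illustrative helper, not part of the SPDK test scripts: tally how long each
FTL management step took, using the 'name:' / 'duration:' trace_step notices
shown in this log. Assumes only the message wording visible in this run."""
import re
import sys
from collections import defaultdict

# Match either a step name or a step duration, in the order they appear.
TOKEN = re.compile(
    r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] "
    r"(?:name: (?P<name>.+?)\s+\d{2}:\d{2}:\d{2}\.\d{3}"
    r"|duration: (?P<dur>[0-9.]+) ms)",
    re.DOTALL,
)

def summarize(text):
    """Return {step name: total milliseconds} for every step in the text."""
    totals = defaultdict(float)
    current = None
    for match in TOKEN.finditer(text):
        if match.group("name") is not None:
            # Step names may be wrapped across physical lines; re-join them.
            current = " ".join(match.group("name").split())
        elif current is not None:
            totals[current] += float(match.group("dur"))
    return dict(totals)

if __name__ == "__main__":
    steps = summarize(sys.stdin.read())
    for name, ms in sorted(steps.items(), key=lambda item: -item[1]):
        print(f"{ms:10.3f} ms  {name}")

Fed this run's output, it would attribute most of the startup time to the 'Scrub NV cache' step (3201.314 ms of the 3494.690 ms reported in the 'FTL startup' summary a few lines below).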
00:27:35.623 [2024-09-28 10:48:10.138414] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:27:38.921 [2024-09-28 10:48:13.339731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.921 [2024-09-28 10:48:13.339805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:27:38.921 [2024-09-28 10:48:13.339825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3201.314 ms 00:27:38.921 [2024-09-28 10:48:13.339837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.921 [2024-09-28 10:48:13.353659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.921 [2024-09-28 10:48:13.353721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:38.921 [2024-09-28 10:48:13.353734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.703 ms 00:27:38.921 [2024-09-28 10:48:13.353748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.922 [2024-09-28 10:48:13.353871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.922 [2024-09-28 10:48:13.353890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:38.922 [2024-09-28 10:48:13.353899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:27:38.922 [2024-09-28 10:48:13.353910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.922 [2024-09-28 10:48:13.365770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.922 [2024-09-28 10:48:13.365825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:38.922 [2024-09-28 10:48:13.365842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.814 ms 00:27:38.922 [2024-09-28 10:48:13.365860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.922 [2024-09-28 10:48:13.365901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.922 [2024-09-28 10:48:13.365913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:38.922 [2024-09-28 10:48:13.365923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:38.922 [2024-09-28 10:48:13.365933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.922 [2024-09-28 10:48:13.366503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.922 [2024-09-28 10:48:13.366533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:38.922 [2024-09-28 10:48:13.366545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.498 ms 00:27:38.922 [2024-09-28 10:48:13.366561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.922 [2024-09-28 10:48:13.366682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.922 [2024-09-28 10:48:13.366713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:38.922 [2024-09-28 10:48:13.366726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:27:38.922 [2024-09-28 10:48:13.366741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.922 [2024-09-28 10:48:13.383327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.922 [2024-09-28 10:48:13.383394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:38.922 [2024-09-28 
10:48:13.383410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.561 ms 00:27:38.922 [2024-09-28 10:48:13.383424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.922 [2024-09-28 10:48:13.393737] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:38.922 [2024-09-28 10:48:13.397538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.922 [2024-09-28 10:48:13.397732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:38.922 [2024-09-28 10:48:13.397760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.963 ms 00:27:38.922 [2024-09-28 10:48:13.397769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.922 [2024-09-28 10:48:13.550911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.922 [2024-09-28 10:48:13.551002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:27:38.922 [2024-09-28 10:48:13.551022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 153.100 ms 00:27:38.922 [2024-09-28 10:48:13.551036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.922 [2024-09-28 10:48:13.551270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.922 [2024-09-28 10:48:13.551290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:38.922 [2024-09-28 10:48:13.551302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.179 ms 00:27:38.922 [2024-09-28 10:48:13.551311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.922 [2024-09-28 10:48:13.557233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.922 [2024-09-28 10:48:13.557282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:27:38.922 [2024-09-28 10:48:13.557297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.872 ms 00:27:38.922 [2024-09-28 10:48:13.557305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.922 [2024-09-28 10:48:13.562322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.922 [2024-09-28 10:48:13.562537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:27:38.922 [2024-09-28 10:48:13.562563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.959 ms 00:27:38.922 [2024-09-28 10:48:13.562571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.922 [2024-09-28 10:48:13.562931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.922 [2024-09-28 10:48:13.562944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:38.922 [2024-09-28 10:48:13.562983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:27:38.922 [2024-09-28 10:48:13.562994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.922 [2024-09-28 10:48:13.602946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.922 [2024-09-28 10:48:13.603011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:27:38.922 [2024-09-28 10:48:13.603028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.917 ms 00:27:38.922 [2024-09-28 10:48:13.603039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.922 [2024-09-28 
10:48:13.610301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.922 [2024-09-28 10:48:13.610350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:27:38.922 [2024-09-28 10:48:13.610365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.176 ms 00:27:38.922 [2024-09-28 10:48:13.610374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.922 [2024-09-28 10:48:13.616401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.922 [2024-09-28 10:48:13.616450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:27:38.922 [2024-09-28 10:48:13.616464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.952 ms 00:27:38.922 [2024-09-28 10:48:13.616472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.922 [2024-09-28 10:48:13.623094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.922 [2024-09-28 10:48:13.623142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:38.922 [2024-09-28 10:48:13.623159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.568 ms 00:27:38.922 [2024-09-28 10:48:13.623167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.922 [2024-09-28 10:48:13.623221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.922 [2024-09-28 10:48:13.623231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:38.922 [2024-09-28 10:48:13.623244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:38.922 [2024-09-28 10:48:13.623257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.922 [2024-09-28 10:48:13.623334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.922 [2024-09-28 10:48:13.623347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:38.922 [2024-09-28 10:48:13.623358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:27:38.922 [2024-09-28 10:48:13.623366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.922 [2024-09-28 10:48:13.624637] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3494.690 ms, result 0 00:27:38.922 { 00:27:38.922 "name": "ftl0", 00:27:38.922 "uuid": "8a691307-d087-4481-8af4-9c5dad4525aa" 00:27:38.922 } 00:27:38.922 10:48:13 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:27:38.922 10:48:13 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:27:39.183 10:48:13 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:27:39.183 10:48:13 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:27:39.446 [2024-09-28 10:48:14.058092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.446 [2024-09-28 10:48:14.058158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:39.446 [2024-09-28 10:48:14.058174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:39.446 [2024-09-28 10:48:14.058187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.446 [2024-09-28 10:48:14.058216] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel 
destroy on app_thread 00:27:39.446 [2024-09-28 10:48:14.058993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.446 [2024-09-28 10:48:14.059031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:39.446 [2024-09-28 10:48:14.059056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.719 ms 00:27:39.446 [2024-09-28 10:48:14.059067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.446 [2024-09-28 10:48:14.059355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.446 [2024-09-28 10:48:14.059450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:39.446 [2024-09-28 10:48:14.059465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:27:39.446 [2024-09-28 10:48:14.059476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.446 [2024-09-28 10:48:14.062756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.446 [2024-09-28 10:48:14.064095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:39.446 [2024-09-28 10:48:14.064132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.253 ms 00:27:39.446 [2024-09-28 10:48:14.064142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.446 [2024-09-28 10:48:14.070618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.446 [2024-09-28 10:48:14.070758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:39.446 [2024-09-28 10:48:14.070828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.436 ms 00:27:39.446 [2024-09-28 10:48:14.070853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.446 [2024-09-28 10:48:14.073985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.446 [2024-09-28 10:48:14.074147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:39.446 [2024-09-28 10:48:14.074218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.997 ms 00:27:39.446 [2024-09-28 10:48:14.074241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.446 [2024-09-28 10:48:14.080673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.446 [2024-09-28 10:48:14.080722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:39.446 [2024-09-28 10:48:14.080737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.372 ms 00:27:39.446 [2024-09-28 10:48:14.080746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.446 [2024-09-28 10:48:14.080889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.446 [2024-09-28 10:48:14.080901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:39.446 [2024-09-28 10:48:14.080914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:27:39.446 [2024-09-28 10:48:14.080925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.446 [2024-09-28 10:48:14.084052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.446 [2024-09-28 10:48:14.084097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:39.446 [2024-09-28 10:48:14.084109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.101 ms 00:27:39.446 [2024-09-28 
10:48:14.084117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.446 [2024-09-28 10:48:14.086767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.446 [2024-09-28 10:48:14.086816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:39.446 [2024-09-28 10:48:14.086834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.595 ms 00:27:39.446 [2024-09-28 10:48:14.086843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.446 [2024-09-28 10:48:14.089049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.446 [2024-09-28 10:48:14.089093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:39.446 [2024-09-28 10:48:14.089106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.154 ms 00:27:39.446 [2024-09-28 10:48:14.089113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.446 [2024-09-28 10:48:14.090802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.446 [2024-09-28 10:48:14.090976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:39.446 [2024-09-28 10:48:14.091000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.610 ms 00:27:39.446 [2024-09-28 10:48:14.091009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.446 [2024-09-28 10:48:14.091049] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:39.446 [2024-09-28 10:48:14.091067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:39.446 [2024-09-28 10:48:14.091081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:39.446 [2024-09-28 10:48:14.091089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:39.446 [2024-09-28 10:48:14.091102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:39.446 [2024-09-28 10:48:14.091109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:39.446 [2024-09-28 10:48:14.091119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:39.446 [2024-09-28 10:48:14.091127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:39.446 [2024-09-28 10:48:14.091137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:39.446 [2024-09-28 10:48:14.091144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:39.446 [2024-09-28 10:48:14.091154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:39.446 [2024-09-28 10:48:14.091162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:39.446 [2024-09-28 10:48:14.091172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:39.446 [2024-09-28 10:48:14.091180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:39.446 [2024-09-28 10:48:14.091189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 
00:27:39.446 [2024-09-28 10:48:14.091197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:39.446 [2024-09-28 10:48:14.091209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:39.446 [2024-09-28 10:48:14.091217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:39.446 [2024-09-28 10:48:14.091227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:39.446 [2024-09-28 10:48:14.091234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:39.446 [2024-09-28 10:48:14.091246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:39.446 [2024-09-28 10:48:14.091253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:39.446 [2024-09-28 10:48:14.091263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:39.446 [2024-09-28 10:48:14.091271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:39.446 [2024-09-28 10:48:14.091280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:39.446 [2024-09-28 10:48:14.091289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:39.446 [2024-09-28 10:48:14.091298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:39.446 [2024-09-28 10:48:14.091306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:39.446 [2024-09-28 10:48:14.091317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:39.446 [2024-09-28 10:48:14.091324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:39.446 [2024-09-28 10:48:14.091334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:39.446 [2024-09-28 10:48:14.091343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:39.446 [2024-09-28 10:48:14.091355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:39.446 [2024-09-28 10:48:14.091364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:39.446 [2024-09-28 10:48:14.091374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:39.446 [2024-09-28 10:48:14.091381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:39.446 [2024-09-28 10:48:14.091392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:39.446 [2024-09-28 10:48:14.091400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 
wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091862] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:39.447 [2024-09-28 10:48:14.091998] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:39.447 [2024-09-28 10:48:14.092009] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8a691307-d087-4481-8af4-9c5dad4525aa 00:27:39.447 [2024-09-28 10:48:14.092018] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:39.447 [2024-09-28 10:48:14.092027] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:39.447 [2024-09-28 10:48:14.092035] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:39.447 [2024-09-28 10:48:14.092046] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:39.447 [2024-09-28 10:48:14.092054] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:39.447 [2024-09-28 10:48:14.092063] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:39.447 [2024-09-28 10:48:14.092071] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:39.447 [2024-09-28 10:48:14.092080] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:39.447 [2024-09-28 10:48:14.092086] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:39.447 [2024-09-28 10:48:14.092096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.447 [2024-09-28 10:48:14.092106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:39.447 [2024-09-28 10:48:14.092117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.048 ms 00:27:39.447 [2024-09-28 10:48:14.092126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.447 [2024-09-28 10:48:14.094473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.447 [2024-09-28 10:48:14.094627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinitialize L2P 00:27:39.447 [2024-09-28 10:48:14.094649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.300 ms 00:27:39.447 [2024-09-28 10:48:14.094658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.447 [2024-09-28 10:48:14.094812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.447 [2024-09-28 10:48:14.094822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:39.447 [2024-09-28 10:48:14.094834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:27:39.447 [2024-09-28 10:48:14.094843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.447 [2024-09-28 10:48:14.102923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:39.447 [2024-09-28 10:48:14.103088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:39.447 [2024-09-28 10:48:14.103153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:39.447 [2024-09-28 10:48:14.103179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.447 [2024-09-28 10:48:14.103273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:39.447 [2024-09-28 10:48:14.103295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:39.447 [2024-09-28 10:48:14.103317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:39.447 [2024-09-28 10:48:14.103336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.447 [2024-09-28 10:48:14.103435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:39.447 [2024-09-28 10:48:14.103461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:39.447 [2024-09-28 10:48:14.103484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:39.448 [2024-09-28 10:48:14.103560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.448 [2024-09-28 10:48:14.103602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:39.448 [2024-09-28 10:48:14.103624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:39.448 [2024-09-28 10:48:14.103646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:39.448 [2024-09-28 10:48:14.103851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.448 [2024-09-28 10:48:14.117790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:39.448 [2024-09-28 10:48:14.117985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:39.448 [2024-09-28 10:48:14.118049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:39.448 [2024-09-28 10:48:14.118074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.448 [2024-09-28 10:48:14.129818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:39.448 [2024-09-28 10:48:14.130017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:39.448 [2024-09-28 10:48:14.130081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:39.448 [2024-09-28 10:48:14.130105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.448 [2024-09-28 10:48:14.130204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:39.448 [2024-09-28 
10:48:14.130232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:39.448 [2024-09-28 10:48:14.130255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:39.448 [2024-09-28 10:48:14.130275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.448 [2024-09-28 10:48:14.130419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:39.448 [2024-09-28 10:48:14.130451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:39.448 [2024-09-28 10:48:14.130478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:39.448 [2024-09-28 10:48:14.130498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.448 [2024-09-28 10:48:14.130603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:39.448 [2024-09-28 10:48:14.131222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:39.448 [2024-09-28 10:48:14.131291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:39.448 [2024-09-28 10:48:14.131315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.448 [2024-09-28 10:48:14.131379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:39.448 [2024-09-28 10:48:14.131403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:39.448 [2024-09-28 10:48:14.131433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:39.448 [2024-09-28 10:48:14.131453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.448 [2024-09-28 10:48:14.131514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:39.448 [2024-09-28 10:48:14.131540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:39.448 [2024-09-28 10:48:14.131562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:39.448 [2024-09-28 10:48:14.131619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.448 [2024-09-28 10:48:14.131699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:39.448 [2024-09-28 10:48:14.131724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:39.448 [2024-09-28 10:48:14.131749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:39.448 [2024-09-28 10:48:14.131769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.448 [2024-09-28 10:48:14.131932] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 73.799 ms, result 0 00:27:39.448 true 00:27:39.448 10:48:14 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 94056 00:27:39.448 10:48:14 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 94056 ']' 00:27:39.448 10:48:14 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 94056 00:27:39.448 10:48:14 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # uname 00:27:39.448 10:48:14 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:39.448 10:48:14 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94056 00:27:39.448 killing process with pid 94056 00:27:39.448 10:48:14 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:39.448 10:48:14 ftl.ftl_restore_fast -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:39.448 10:48:14 ftl.ftl_restore_fast -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94056' 00:27:39.448 10:48:14 ftl.ftl_restore_fast -- common/autotest_common.sh@969 -- # kill 94056 00:27:39.448 10:48:14 ftl.ftl_restore_fast -- common/autotest_common.sh@974 -- # wait 94056 00:27:44.737 10:48:18 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:27:48.987 262144+0 records in 00:27:48.987 262144+0 records out 00:27:48.987 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.50775 s, 238 MB/s 00:27:48.987 10:48:23 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:50.894 10:48:25 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:50.894 [2024-09-28 10:48:25.352695] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:27:50.894 [2024-09-28 10:48:25.353276] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94270 ] 00:27:50.894 [2024-09-28 10:48:25.482077] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:50.894 [2024-09-28 10:48:25.503058] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:50.894 [2024-09-28 10:48:25.537675] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:50.894 [2024-09-28 10:48:25.628876] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:50.894 [2024-09-28 10:48:25.629534] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:51.157 [2024-09-28 10:48:25.785371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.157 [2024-09-28 10:48:25.785420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:51.157 [2024-09-28 10:48:25.785432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:51.157 [2024-09-28 10:48:25.785441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.157 [2024-09-28 10:48:25.785495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.157 [2024-09-28 10:48:25.785505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:51.157 [2024-09-28 10:48:25.785517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:27:51.157 [2024-09-28 10:48:25.785524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.157 [2024-09-28 10:48:25.785549] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:51.157 [2024-09-28 10:48:25.785785] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:51.157 [2024-09-28 10:48:25.785801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.157 [2024-09-28 10:48:25.785811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:51.157 [2024-09-28 
10:48:25.785820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:27:51.157 [2024-09-28 10:48:25.785831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.157 [2024-09-28 10:48:25.787023] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:51.158 [2024-09-28 10:48:25.789797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.158 [2024-09-28 10:48:25.789833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:51.158 [2024-09-28 10:48:25.789848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.775 ms 00:27:51.158 [2024-09-28 10:48:25.789856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.158 [2024-09-28 10:48:25.789916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.158 [2024-09-28 10:48:25.789925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:51.158 [2024-09-28 10:48:25.789936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:27:51.158 [2024-09-28 10:48:25.789947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.158 [2024-09-28 10:48:25.795356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.158 [2024-09-28 10:48:25.795387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:51.158 [2024-09-28 10:48:25.795402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.327 ms 00:27:51.158 [2024-09-28 10:48:25.795409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.158 [2024-09-28 10:48:25.795478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.158 [2024-09-28 10:48:25.795486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:51.158 [2024-09-28 10:48:25.795493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:27:51.158 [2024-09-28 10:48:25.795501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.158 [2024-09-28 10:48:25.795541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.158 [2024-09-28 10:48:25.795552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:51.158 [2024-09-28 10:48:25.795566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:51.158 [2024-09-28 10:48:25.795576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.158 [2024-09-28 10:48:25.795597] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:51.158 [2024-09-28 10:48:25.797060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.158 [2024-09-28 10:48:25.797086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:51.158 [2024-09-28 10:48:25.797096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.467 ms 00:27:51.158 [2024-09-28 10:48:25.797108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.158 [2024-09-28 10:48:25.797136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.158 [2024-09-28 10:48:25.797144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:51.158 [2024-09-28 10:48:25.797152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:51.158 [2024-09-28 
10:48:25.797162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.158 [2024-09-28 10:48:25.797181] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:51.158 [2024-09-28 10:48:25.797201] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:51.158 [2024-09-28 10:48:25.797240] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:51.158 [2024-09-28 10:48:25.797256] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:51.158 [2024-09-28 10:48:25.797363] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:51.158 [2024-09-28 10:48:25.797375] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:51.158 [2024-09-28 10:48:25.797391] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:51.158 [2024-09-28 10:48:25.797401] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:51.158 [2024-09-28 10:48:25.797410] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:51.158 [2024-09-28 10:48:25.797419] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:51.158 [2024-09-28 10:48:25.797429] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:51.158 [2024-09-28 10:48:25.797437] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:51.158 [2024-09-28 10:48:25.797445] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:51.158 [2024-09-28 10:48:25.797453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.158 [2024-09-28 10:48:25.797461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:51.158 [2024-09-28 10:48:25.797468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:27:51.158 [2024-09-28 10:48:25.797476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.158 [2024-09-28 10:48:25.797562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.158 [2024-09-28 10:48:25.797573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:51.158 [2024-09-28 10:48:25.797581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:27:51.158 [2024-09-28 10:48:25.797589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.158 [2024-09-28 10:48:25.797684] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:51.158 [2024-09-28 10:48:25.797695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:51.158 [2024-09-28 10:48:25.797703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:51.158 [2024-09-28 10:48:25.797712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:51.158 [2024-09-28 10:48:25.797720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:51.158 [2024-09-28 10:48:25.797728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:51.158 [2024-09-28 10:48:25.797737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 
blocks: 80.00 MiB 00:27:51.158 [2024-09-28 10:48:25.797746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:51.158 [2024-09-28 10:48:25.797760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:51.158 [2024-09-28 10:48:25.797768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:51.158 [2024-09-28 10:48:25.797776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:51.158 [2024-09-28 10:48:25.797785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:51.158 [2024-09-28 10:48:25.797795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:51.158 [2024-09-28 10:48:25.797802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:51.158 [2024-09-28 10:48:25.797810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:51.158 [2024-09-28 10:48:25.797817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:51.158 [2024-09-28 10:48:25.797825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:51.158 [2024-09-28 10:48:25.797834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:51.158 [2024-09-28 10:48:25.797842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:51.158 [2024-09-28 10:48:25.797850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:51.158 [2024-09-28 10:48:25.797858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:51.158 [2024-09-28 10:48:25.797865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:51.158 [2024-09-28 10:48:25.797873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:51.158 [2024-09-28 10:48:25.797881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:51.158 [2024-09-28 10:48:25.797888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:51.158 [2024-09-28 10:48:25.797895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:51.158 [2024-09-28 10:48:25.797903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:51.158 [2024-09-28 10:48:25.797910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:51.158 [2024-09-28 10:48:25.798218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:51.158 [2024-09-28 10:48:25.798227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:51.158 [2024-09-28 10:48:25.798235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:51.158 [2024-09-28 10:48:25.798243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:51.158 [2024-09-28 10:48:25.798250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:51.158 [2024-09-28 10:48:25.798258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:51.158 [2024-09-28 10:48:25.798264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:51.158 [2024-09-28 10:48:25.798271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:51.158 [2024-09-28 10:48:25.798277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:51.158 [2024-09-28 10:48:25.798284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:51.158 [2024-09-28 10:48:25.798290] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:51.158 [2024-09-28 10:48:25.798296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:51.158 [2024-09-28 10:48:25.798303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:51.158 [2024-09-28 10:48:25.798310] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:51.158 [2024-09-28 10:48:25.798317] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:51.158 [2024-09-28 10:48:25.798324] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:51.158 [2024-09-28 10:48:25.798335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:51.158 [2024-09-28 10:48:25.798343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:51.158 [2024-09-28 10:48:25.798351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:51.158 [2024-09-28 10:48:25.798358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:51.158 [2024-09-28 10:48:25.798364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:51.158 [2024-09-28 10:48:25.798383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:51.158 [2024-09-28 10:48:25.798392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:51.159 [2024-09-28 10:48:25.798399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:51.159 [2024-09-28 10:48:25.798406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:51.159 [2024-09-28 10:48:25.798414] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:51.159 [2024-09-28 10:48:25.798423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:51.159 [2024-09-28 10:48:25.798432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:51.159 [2024-09-28 10:48:25.798439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:51.159 [2024-09-28 10:48:25.798446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:51.159 [2024-09-28 10:48:25.798453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:51.159 [2024-09-28 10:48:25.798461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:51.159 [2024-09-28 10:48:25.798469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:51.159 [2024-09-28 10:48:25.798476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:51.159 [2024-09-28 10:48:25.798484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:51.159 [2024-09-28 10:48:25.798492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:51.159 [2024-09-28 
10:48:25.798499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:51.159 [2024-09-28 10:48:25.798506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:51.159 [2024-09-28 10:48:25.798513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:51.159 [2024-09-28 10:48:25.798520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:51.159 [2024-09-28 10:48:25.798527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:51.159 [2024-09-28 10:48:25.798534] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:51.159 [2024-09-28 10:48:25.798542] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:51.159 [2024-09-28 10:48:25.798551] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:51.159 [2024-09-28 10:48:25.798557] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:51.159 [2024-09-28 10:48:25.798564] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:51.159 [2024-09-28 10:48:25.798571] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:51.159 [2024-09-28 10:48:25.798578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.159 [2024-09-28 10:48:25.798588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:51.159 [2024-09-28 10:48:25.798599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.963 ms 00:27:51.159 [2024-09-28 10:48:25.798608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.159 [2024-09-28 10:48:25.817431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.159 [2024-09-28 10:48:25.817587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:51.159 [2024-09-28 10:48:25.817646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.773 ms 00:27:51.159 [2024-09-28 10:48:25.817670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.159 [2024-09-28 10:48:25.817775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.159 [2024-09-28 10:48:25.817799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:51.159 [2024-09-28 10:48:25.817819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:27:51.159 [2024-09-28 10:48:25.817872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.159 [2024-09-28 10:48:25.828007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.159 [2024-09-28 10:48:25.828153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:51.159 [2024-09-28 10:48:25.828218] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 10.059 ms 00:27:51.159 [2024-09-28 10:48:25.828249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.159 [2024-09-28 10:48:25.828305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.159 [2024-09-28 10:48:25.828336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:51.159 [2024-09-28 10:48:25.828365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:51.159 [2024-09-28 10:48:25.828392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.159 [2024-09-28 10:48:25.828847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.159 [2024-09-28 10:48:25.828984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:51.159 [2024-09-28 10:48:25.829048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.361 ms 00:27:51.159 [2024-09-28 10:48:25.829078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.159 [2024-09-28 10:48:25.829270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.159 [2024-09-28 10:48:25.830029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:51.159 [2024-09-28 10:48:25.830266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:27:51.159 [2024-09-28 10:48:25.830344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.159 [2024-09-28 10:48:25.838941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.159 [2024-09-28 10:48:25.839203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:51.159 [2024-09-28 10:48:25.839357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.286 ms 00:27:51.159 [2024-09-28 10:48:25.839418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.159 [2024-09-28 10:48:25.843386] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:27:51.159 [2024-09-28 10:48:25.843514] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:51.159 [2024-09-28 10:48:25.843573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.159 [2024-09-28 10:48:25.843593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:51.159 [2024-09-28 10:48:25.843612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.796 ms 00:27:51.159 [2024-09-28 10:48:25.843638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.159 [2024-09-28 10:48:25.858193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.159 [2024-09-28 10:48:25.858331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:51.159 [2024-09-28 10:48:25.858391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.506 ms 00:27:51.159 [2024-09-28 10:48:25.858414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.159 [2024-09-28 10:48:25.860515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.159 [2024-09-28 10:48:25.860623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:51.159 [2024-09-28 10:48:25.860670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.056 ms 00:27:51.159 [2024-09-28 
10:48:25.860692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.159 [2024-09-28 10:48:25.863310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.159 [2024-09-28 10:48:25.863448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:51.159 [2024-09-28 10:48:25.863504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.071 ms 00:27:51.159 [2024-09-28 10:48:25.863527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.159 [2024-09-28 10:48:25.864170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.159 [2024-09-28 10:48:25.864302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:51.159 [2024-09-28 10:48:25.864328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:27:51.159 [2024-09-28 10:48:25.864336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.159 [2024-09-28 10:48:25.882099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.159 [2024-09-28 10:48:25.882155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:51.159 [2024-09-28 10:48:25.882173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.740 ms 00:27:51.159 [2024-09-28 10:48:25.882181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.159 [2024-09-28 10:48:25.889713] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:51.159 [2024-09-28 10:48:25.892242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.159 [2024-09-28 10:48:25.892275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:51.159 [2024-09-28 10:48:25.892297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.020 ms 00:27:51.159 [2024-09-28 10:48:25.892306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.159 [2024-09-28 10:48:25.892397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.159 [2024-09-28 10:48:25.892408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:51.159 [2024-09-28 10:48:25.892417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:51.159 [2024-09-28 10:48:25.892425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.159 [2024-09-28 10:48:25.892494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.159 [2024-09-28 10:48:25.892505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:51.159 [2024-09-28 10:48:25.892514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:27:51.159 [2024-09-28 10:48:25.892524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.159 [2024-09-28 10:48:25.892544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.159 [2024-09-28 10:48:25.892552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:51.159 [2024-09-28 10:48:25.892564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:51.159 [2024-09-28 10:48:25.892571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.159 [2024-09-28 10:48:25.892604] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:51.159 [2024-09-28 10:48:25.892616] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.159 [2024-09-28 10:48:25.892629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:51.159 [2024-09-28 10:48:25.892636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:51.159 [2024-09-28 10:48:25.892646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.160 [2024-09-28 10:48:25.897146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.160 [2024-09-28 10:48:25.897191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:51.160 [2024-09-28 10:48:25.897201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.479 ms 00:27:51.160 [2024-09-28 10:48:25.897209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.160 [2024-09-28 10:48:25.897281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.160 [2024-09-28 10:48:25.897291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:51.160 [2024-09-28 10:48:25.897304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:27:51.160 [2024-09-28 10:48:25.897314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.160 [2024-09-28 10:48:25.898456] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 112.650 ms, result 0 00:28:56.204  Copying: 15/1024 [MB] (15 MBps) Copying: 35/1024 [MB] (19 MBps) Copying: 56/1024 [MB] (20 MBps) Copying: 78/1024 [MB] (22 MBps) Copying: 96/1024 [MB] (18 MBps) Copying: 125/1024 [MB] (29 MBps) Copying: 148/1024 [MB] (22 MBps) Copying: 163/1024 [MB] (14 MBps) Copying: 174/1024 [MB] (10 MBps) Copying: 196/1024 [MB] (22 MBps) Copying: 212/1024 [MB] (16 MBps) Copying: 225/1024 [MB] (13 MBps) Copying: 242/1024 [MB] (16 MBps) Copying: 252/1024 [MB] (10 MBps) Copying: 263/1024 [MB] (10 MBps) Copying: 274/1024 [MB] (11 MBps) Copying: 298/1024 [MB] (23 MBps) Copying: 311/1024 [MB] (13 MBps) Copying: 329152/1048576 [kB] (10208 kBps) Copying: 345/1024 [MB] (23 MBps) Copying: 357/1024 [MB] (11 MBps) Copying: 375792/1048576 [kB] (10104 kBps) Copying: 382/1024 [MB] (15 MBps) Copying: 399/1024 [MB] (16 MBps) Copying: 409/1024 [MB] (10 MBps) Copying: 435/1024 [MB] (26 MBps) Copying: 447/1024 [MB] (11 MBps) Copying: 458/1024 [MB] (11 MBps) Copying: 470/1024 [MB] (11 MBps) Copying: 485/1024 [MB] (14 MBps) Copying: 501/1024 [MB] (15 MBps) Copying: 513/1024 [MB] (11 MBps) Copying: 528/1024 [MB] (15 MBps) Copying: 548/1024 [MB] (20 MBps) Copying: 561/1024 [MB] (12 MBps) Copying: 579/1024 [MB] (18 MBps) Copying: 596/1024 [MB] (17 MBps) Copying: 621/1024 [MB] (25 MBps) Copying: 638/1024 [MB] (16 MBps) Copying: 650/1024 [MB] (12 MBps) Copying: 676/1024 [MB] (26 MBps) Copying: 695/1024 [MB] (18 MBps) Copying: 708/1024 [MB] (13 MBps) Copying: 721/1024 [MB] (13 MBps) Copying: 734/1024 [MB] (12 MBps) Copying: 747/1024 [MB] (12 MBps) Copying: 758/1024 [MB] (11 MBps) Copying: 768/1024 [MB] (10 MBps) Copying: 784/1024 [MB] (15 MBps) Copying: 804/1024 [MB] (20 MBps) Copying: 814/1024 [MB] (10 MBps) Copying: 827/1024 [MB] (12 MBps) Copying: 857112/1048576 [kB] (10228 kBps) Copying: 867240/1048576 [kB] (10128 kBps) Copying: 859/1024 [MB] (12 MBps) Copying: 870/1024 [MB] (10 MBps) Copying: 882/1024 [MB] (12 MBps) Copying: 892/1024 [MB] (10 MBps) Copying: 902/1024 [MB] (10 MBps) Copying: 921/1024 [MB] (18 MBps) Copying: 940/1024 [MB] (19 MBps) 
Copying: 966/1024 [MB] (25 MBps) Copying: 985/1024 [MB] (19 MBps) Copying: 1005/1024 [MB] (19 MBps) Copying: 1024/1024 [MB] (average 15 MBps)[2024-09-28 10:49:30.906681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.204 [2024-09-28 10:49:30.906840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:56.204 [2024-09-28 10:49:30.906947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:56.204 [2024-09-28 10:49:30.906997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.204 [2024-09-28 10:49:30.907044] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:56.204 [2024-09-28 10:49:30.907811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.204 [2024-09-28 10:49:30.907957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:56.204 [2024-09-28 10:49:30.908034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.727 ms 00:28:56.204 [2024-09-28 10:49:30.908058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.204 [2024-09-28 10:49:30.911321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.204 [2024-09-28 10:49:30.911465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:56.204 [2024-09-28 10:49:30.911528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.220 ms 00:28:56.204 [2024-09-28 10:49:30.911552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.204 [2024-09-28 10:49:30.911603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.204 [2024-09-28 10:49:30.911624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:28:56.204 [2024-09-28 10:49:30.911652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:56.204 [2024-09-28 10:49:30.911671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.204 [2024-09-28 10:49:30.911740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.204 [2024-09-28 10:49:30.911829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:28:56.204 [2024-09-28 10:49:30.911854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:28:56.204 [2024-09-28 10:49:30.911873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.204 [2024-09-28 10:49:30.911901] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:56.204 [2024-09-28 10:49:30.911931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:56.204 [2024-09-28 10:49:30.911977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:56.204 [2024-09-28 10:49:30.912047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:56.204 [2024-09-28 10:49:30.912081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:56.204 [2024-09-28 10:49:30.912110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:56.204 [2024-09-28 10:49:30.912138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:56.204 [2024-09-28 10:49:30.912166] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:56.204 [2024-09-28 10:49:30.912195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:56.204 [2024-09-28 10:49:30.912223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.912251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.912279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.912307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.912335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.912363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.912391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.912419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.912503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.912568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.912727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.912761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.912789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.912829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.912856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.912886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.912914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.913000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.913031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.913060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.913089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.913118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.913148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 
10:49:30.913183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.913212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.913240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.913267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.913295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.913322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.913350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.913461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.913492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.913521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.913548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.913577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.913604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.913632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.913659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.913687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.913715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.913864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.913892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.913920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.913947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.913989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.914019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.914046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.914074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 
00:28:56.205 [2024-09-28 10:49:30.914102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.914158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.914210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.914240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.914302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.914346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.914373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.914401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.914429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.914457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.914485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.914543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.914574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.914603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.914631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.914679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.914711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.914740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.914768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.914796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.915055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.915065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.915074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.915082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.915090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 
wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.915098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.915106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.915113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.915122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.915130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.915138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.915146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.915154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.915161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.915169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.915176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.915184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.915192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.915199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.915207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.915215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.915223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:56.205 [2024-09-28 10:49:30.915231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:56.206 [2024-09-28 10:49:30.915238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:56.206 [2024-09-28 10:49:30.915255] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:56.206 [2024-09-28 10:49:30.915264] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8a691307-d087-4481-8af4-9c5dad4525aa 00:28:56.206 [2024-09-28 10:49:30.915272] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:56.206 [2024-09-28 10:49:30.915280] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:28:56.206 [2024-09-28 10:49:30.915288] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:56.206 [2024-09-28 10:49:30.915296] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:56.206 [2024-09-28 10:49:30.915304] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] limits: 00:28:56.206 [2024-09-28 10:49:30.915313] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:56.206 [2024-09-28 10:49:30.915321] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:56.206 [2024-09-28 10:49:30.915328] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:56.206 [2024-09-28 10:49:30.915334] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:56.206 [2024-09-28 10:49:30.915342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.206 [2024-09-28 10:49:30.915350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:56.206 [2024-09-28 10:49:30.915360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.442 ms 00:28:56.206 [2024-09-28 10:49:30.915373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.206 [2024-09-28 10:49:30.917631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.206 [2024-09-28 10:49:30.917772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:56.206 [2024-09-28 10:49:30.917795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.236 ms 00:28:56.206 [2024-09-28 10:49:30.917803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.206 [2024-09-28 10:49:30.917918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.206 [2024-09-28 10:49:30.917931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:56.206 [2024-09-28 10:49:30.917941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:28:56.206 [2024-09-28 10:49:30.917947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.206 [2024-09-28 10:49:30.924503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.206 [2024-09-28 10:49:30.924663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:56.206 [2024-09-28 10:49:30.924687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.206 [2024-09-28 10:49:30.924695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.206 [2024-09-28 10:49:30.924765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.206 [2024-09-28 10:49:30.924776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:56.206 [2024-09-28 10:49:30.924788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.206 [2024-09-28 10:49:30.924795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.206 [2024-09-28 10:49:30.924854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.206 [2024-09-28 10:49:30.924864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:56.206 [2024-09-28 10:49:30.924872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.206 [2024-09-28 10:49:30.924880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.206 [2024-09-28 10:49:30.924894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.206 [2024-09-28 10:49:30.924902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:56.206 [2024-09-28 10:49:30.924913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.206 [2024-09-28 
10:49:30.924920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.206 [2024-09-28 10:49:30.938707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.206 [2024-09-28 10:49:30.938767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:56.206 [2024-09-28 10:49:30.938779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.206 [2024-09-28 10:49:30.938788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.206 [2024-09-28 10:49:30.949770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.206 [2024-09-28 10:49:30.950013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:56.206 [2024-09-28 10:49:30.950044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.206 [2024-09-28 10:49:30.950053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.206 [2024-09-28 10:49:30.950112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.206 [2024-09-28 10:49:30.950130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:56.206 [2024-09-28 10:49:30.950147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.206 [2024-09-28 10:49:30.950156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.206 [2024-09-28 10:49:30.950194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.206 [2024-09-28 10:49:30.950204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:56.206 [2024-09-28 10:49:30.950213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.206 [2024-09-28 10:49:30.950221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.206 [2024-09-28 10:49:30.950287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.206 [2024-09-28 10:49:30.950297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:56.206 [2024-09-28 10:49:30.950321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.206 [2024-09-28 10:49:30.950333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.206 [2024-09-28 10:49:30.950364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.206 [2024-09-28 10:49:30.950373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:56.206 [2024-09-28 10:49:30.950382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.206 [2024-09-28 10:49:30.950390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.206 [2024-09-28 10:49:30.950434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.206 [2024-09-28 10:49:30.950443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:56.206 [2024-09-28 10:49:30.950452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.206 [2024-09-28 10:49:30.950460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.206 [2024-09-28 10:49:30.950507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.206 [2024-09-28 10:49:30.950518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:56.206 [2024-09-28 10:49:30.950528] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.206 [2024-09-28 10:49:30.950537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.206 [2024-09-28 10:49:30.950674] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 43.957 ms, result 0 00:28:56.468 00:28:56.468 00:28:56.468 10:49:31 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:28:56.729 [2024-09-28 10:49:31.305539] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:28:56.730 [2024-09-28 10:49:31.305686] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94933 ] 00:28:56.730 [2024-09-28 10:49:31.439619] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:56.730 [2024-09-28 10:49:31.461958] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:56.990 [2024-09-28 10:49:31.512467] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:56.990 [2024-09-28 10:49:31.632382] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:56.990 [2024-09-28 10:49:31.632462] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:57.253 [2024-09-28 10:49:31.793206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.253 [2024-09-28 10:49:31.793418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:57.253 [2024-09-28 10:49:31.793443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:57.253 [2024-09-28 10:49:31.793453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.253 [2024-09-28 10:49:31.793521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.253 [2024-09-28 10:49:31.793532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:57.253 [2024-09-28 10:49:31.793542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:28:57.253 [2024-09-28 10:49:31.793550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.253 [2024-09-28 10:49:31.793574] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:57.253 [2024-09-28 10:49:31.793815] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:57.253 [2024-09-28 10:49:31.793836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.253 [2024-09-28 10:49:31.793847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:57.253 [2024-09-28 10:49:31.793857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:28:57.253 [2024-09-28 10:49:31.793868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.253 [2024-09-28 10:49:31.794194] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:28:57.253 [2024-09-28 10:49:31.794223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:28:57.253 [2024-09-28 10:49:31.794234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:57.253 [2024-09-28 10:49:31.794245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:28:57.253 [2024-09-28 10:49:31.794262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.253 [2024-09-28 10:49:31.794329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.253 [2024-09-28 10:49:31.794347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:57.253 [2024-09-28 10:49:31.794357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:28:57.253 [2024-09-28 10:49:31.794366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.253 [2024-09-28 10:49:31.794618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.253 [2024-09-28 10:49:31.794630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:57.253 [2024-09-28 10:49:31.794639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.215 ms 00:28:57.253 [2024-09-28 10:49:31.794647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.253 [2024-09-28 10:49:31.794720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.253 [2024-09-28 10:49:31.794729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:57.253 [2024-09-28 10:49:31.794738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:28:57.253 [2024-09-28 10:49:31.794746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.253 [2024-09-28 10:49:31.794769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.253 [2024-09-28 10:49:31.794782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:57.253 [2024-09-28 10:49:31.794794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:57.253 [2024-09-28 10:49:31.794802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.253 [2024-09-28 10:49:31.794820] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:57.253 [2024-09-28 10:49:31.796879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.253 [2024-09-28 10:49:31.797157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:57.253 [2024-09-28 10:49:31.797182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.063 ms 00:28:57.253 [2024-09-28 10:49:31.797190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.253 [2024-09-28 10:49:31.797237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.253 [2024-09-28 10:49:31.797246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:57.253 [2024-09-28 10:49:31.797254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:28:57.253 [2024-09-28 10:49:31.797262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.253 [2024-09-28 10:49:31.797310] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:57.253 [2024-09-28 10:49:31.797333] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:57.253 [2024-09-28 10:49:31.797377] 
upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:57.253 [2024-09-28 10:49:31.797393] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:57.253 [2024-09-28 10:49:31.797496] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:57.253 [2024-09-28 10:49:31.797507] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:57.253 [2024-09-28 10:49:31.797518] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:57.253 [2024-09-28 10:49:31.797528] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:57.253 [2024-09-28 10:49:31.797541] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:57.253 [2024-09-28 10:49:31.797553] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:57.253 [2024-09-28 10:49:31.797561] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:57.253 [2024-09-28 10:49:31.797568] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:57.253 [2024-09-28 10:49:31.797576] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:57.253 [2024-09-28 10:49:31.797588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.253 [2024-09-28 10:49:31.797597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:57.253 [2024-09-28 10:49:31.797610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:28:57.253 [2024-09-28 10:49:31.797619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.253 [2024-09-28 10:49:31.797706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.253 [2024-09-28 10:49:31.797716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:57.253 [2024-09-28 10:49:31.797728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:28:57.253 [2024-09-28 10:49:31.797739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.253 [2024-09-28 10:49:31.797836] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:57.253 [2024-09-28 10:49:31.797847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:57.253 [2024-09-28 10:49:31.797862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:57.253 [2024-09-28 10:49:31.797870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:57.253 [2024-09-28 10:49:31.797880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:57.253 [2024-09-28 10:49:31.797889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:57.253 [2024-09-28 10:49:31.797897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:57.254 [2024-09-28 10:49:31.797905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:57.254 [2024-09-28 10:49:31.797914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:57.254 [2024-09-28 10:49:31.797922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:57.254 [2024-09-28 10:49:31.797937] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:57.254 [2024-09-28 10:49:31.797945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:57.254 [2024-09-28 10:49:31.797953] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:57.254 [2024-09-28 10:49:31.797987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:57.254 [2024-09-28 10:49:31.797997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:57.254 [2024-09-28 10:49:31.798005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:57.254 [2024-09-28 10:49:31.798013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:57.254 [2024-09-28 10:49:31.798022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:57.254 [2024-09-28 10:49:31.798033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:57.254 [2024-09-28 10:49:31.798040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:57.254 [2024-09-28 10:49:31.798048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:57.254 [2024-09-28 10:49:31.798055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:57.254 [2024-09-28 10:49:31.798065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:57.254 [2024-09-28 10:49:31.798073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:57.254 [2024-09-28 10:49:31.798080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:57.254 [2024-09-28 10:49:31.798087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:57.254 [2024-09-28 10:49:31.798093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:57.254 [2024-09-28 10:49:31.798100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:57.254 [2024-09-28 10:49:31.798107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:57.254 [2024-09-28 10:49:31.798114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:57.254 [2024-09-28 10:49:31.798120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:57.254 [2024-09-28 10:49:31.798127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:57.254 [2024-09-28 10:49:31.798134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:57.254 [2024-09-28 10:49:31.798142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:57.254 [2024-09-28 10:49:31.798156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:57.254 [2024-09-28 10:49:31.798164] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:57.254 [2024-09-28 10:49:31.798170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:57.254 [2024-09-28 10:49:31.798177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:57.254 [2024-09-28 10:49:31.798183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:57.254 [2024-09-28 10:49:31.798190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:57.254 [2024-09-28 10:49:31.798196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:57.254 [2024-09-28 10:49:31.798203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 
00:28:57.254 [2024-09-28 10:49:31.798210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:57.254 [2024-09-28 10:49:31.798217] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:57.254 [2024-09-28 10:49:31.798224] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:57.254 [2024-09-28 10:49:31.798237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:57.254 [2024-09-28 10:49:31.798244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:57.254 [2024-09-28 10:49:31.798255] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:57.254 [2024-09-28 10:49:31.798262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:57.254 [2024-09-28 10:49:31.798269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:57.254 [2024-09-28 10:49:31.798279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:57.254 [2024-09-28 10:49:31.798293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:57.254 [2024-09-28 10:49:31.798300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:57.254 [2024-09-28 10:49:31.798323] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:57.254 [2024-09-28 10:49:31.798334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:57.254 [2024-09-28 10:49:31.798350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:57.254 [2024-09-28 10:49:31.798358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:57.254 [2024-09-28 10:49:31.798365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:57.254 [2024-09-28 10:49:31.798373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:57.254 [2024-09-28 10:49:31.798381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:57.254 [2024-09-28 10:49:31.798388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:57.254 [2024-09-28 10:49:31.798396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:57.254 [2024-09-28 10:49:31.798403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:57.254 [2024-09-28 10:49:31.798411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:57.254 [2024-09-28 10:49:31.798418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:57.254 [2024-09-28 10:49:31.798425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:57.254 [2024-09-28 10:49:31.798435] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:57.254 [2024-09-28 10:49:31.798443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:57.254 [2024-09-28 10:49:31.798452] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:57.254 [2024-09-28 10:49:31.798459] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:57.254 [2024-09-28 10:49:31.798467] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:57.254 [2024-09-28 10:49:31.798476] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:57.254 [2024-09-28 10:49:31.798484] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:57.254 [2024-09-28 10:49:31.798491] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:57.254 [2024-09-28 10:49:31.798499] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:57.254 [2024-09-28 10:49:31.798507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.254 [2024-09-28 10:49:31.798515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:57.254 [2024-09-28 10:49:31.798524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.738 ms 00:28:57.254 [2024-09-28 10:49:31.798531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.254 [2024-09-28 10:49:31.818346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.254 [2024-09-28 10:49:31.818579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:57.254 [2024-09-28 10:49:31.818675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.768 ms 00:28:57.254 [2024-09-28 10:49:31.818728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.254 [2024-09-28 10:49:31.818893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.254 [2024-09-28 10:49:31.819058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:57.254 [2024-09-28 10:49:31.819098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:28:57.254 [2024-09-28 10:49:31.819128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.254 [2024-09-28 10:49:31.831217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.254 [2024-09-28 10:49:31.831371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:57.254 [2024-09-28 10:49:31.831425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.968 ms 00:28:57.254 [2024-09-28 10:49:31.831447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.254 [2024-09-28 10:49:31.831495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.254 [2024-09-28 10:49:31.831518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:57.254 
[2024-09-28 10:49:31.831538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:57.254 [2024-09-28 10:49:31.831558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.254 [2024-09-28 10:49:31.831674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.254 [2024-09-28 10:49:31.831815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:57.254 [2024-09-28 10:49:31.831842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:28:57.254 [2024-09-28 10:49:31.831861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.254 [2024-09-28 10:49:31.832019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.254 [2024-09-28 10:49:31.832044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:57.254 [2024-09-28 10:49:31.832127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:28:57.254 [2024-09-28 10:49:31.832151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.254 [2024-09-28 10:49:31.838814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.254 [2024-09-28 10:49:31.838976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:57.254 [2024-09-28 10:49:31.839047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.629 ms 00:28:57.254 [2024-09-28 10:49:31.839075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.254 [2024-09-28 10:49:31.839205] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:57.255 [2024-09-28 10:49:31.839244] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:57.255 [2024-09-28 10:49:31.839340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.255 [2024-09-28 10:49:31.839361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:57.255 [2024-09-28 10:49:31.839387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:28:57.255 [2024-09-28 10:49:31.839406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.255 [2024-09-28 10:49:31.851723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.255 [2024-09-28 10:49:31.851866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:57.255 [2024-09-28 10:49:31.851938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.283 ms 00:28:57.255 [2024-09-28 10:49:31.851977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.255 [2024-09-28 10:49:31.852119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.255 [2024-09-28 10:49:31.852142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:57.255 [2024-09-28 10:49:31.852215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:28:57.255 [2024-09-28 10:49:31.852243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.255 [2024-09-28 10:49:31.852313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.255 [2024-09-28 10:49:31.852679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:57.255 [2024-09-28 10:49:31.852744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.002 ms 00:28:57.255 [2024-09-28 10:49:31.853130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.255 [2024-09-28 10:49:31.853598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.255 [2024-09-28 10:49:31.853719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:57.255 [2024-09-28 10:49:31.853792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:28:57.255 [2024-09-28 10:49:31.853820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.255 [2024-09-28 10:49:31.853855] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:28:57.255 [2024-09-28 10:49:31.853888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.255 [2024-09-28 10:49:31.853908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:57.255 [2024-09-28 10:49:31.853931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:28:57.255 [2024-09-28 10:49:31.853950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.255 [2024-09-28 10:49:31.863313] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:57.255 [2024-09-28 10:49:31.863571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.255 [2024-09-28 10:49:31.863604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:57.255 [2024-09-28 10:49:31.863615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.560 ms 00:28:57.255 [2024-09-28 10:49:31.863625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.255 [2024-09-28 10:49:31.866215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.255 [2024-09-28 10:49:31.866359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:57.255 [2024-09-28 10:49:31.866377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.555 ms 00:28:57.255 [2024-09-28 10:49:31.866384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.255 [2024-09-28 10:49:31.866493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.255 [2024-09-28 10:49:31.866505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:57.255 [2024-09-28 10:49:31.866514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:28:57.255 [2024-09-28 10:49:31.866522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.255 [2024-09-28 10:49:31.866550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.255 [2024-09-28 10:49:31.866558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:57.255 [2024-09-28 10:49:31.866567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:57.255 [2024-09-28 10:49:31.866575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.255 [2024-09-28 10:49:31.866614] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:57.255 [2024-09-28 10:49:31.866627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.255 [2024-09-28 10:49:31.866635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:57.255 [2024-09-28 10:49:31.866643] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:57.255 [2024-09-28 10:49:31.866651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.255 [2024-09-28 10:49:31.872238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.255 [2024-09-28 10:49:31.872292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:57.255 [2024-09-28 10:49:31.872304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.566 ms 00:28:57.255 [2024-09-28 10:49:31.872312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.255 [2024-09-28 10:49:31.872399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.255 [2024-09-28 10:49:31.872408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:57.255 [2024-09-28 10:49:31.872421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:28:57.255 [2024-09-28 10:49:31.872432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.255 [2024-09-28 10:49:31.873545] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 79.897 ms, result 0 00:29:59.129  Copying: 16/1024 [MB] (16 MBps) Copying: 34/1024 [MB] (18 MBps) Copying: 56/1024 [MB] (22 MBps) Copying: 73/1024 [MB] (17 MBps) Copying: 92/1024 [MB] (18 MBps) Copying: 107/1024 [MB] (15 MBps) Copying: 118/1024 [MB] (11 MBps) Copying: 130/1024 [MB] (11 MBps) Copying: 143/1024 [MB] (13 MBps) Copying: 176/1024 [MB] (32 MBps) Copying: 198/1024 [MB] (22 MBps) Copying: 219/1024 [MB] (20 MBps) Copying: 238/1024 [MB] (19 MBps) Copying: 263/1024 [MB] (24 MBps) Copying: 285/1024 [MB] (22 MBps) Copying: 305/1024 [MB] (19 MBps) Copying: 327/1024 [MB] (21 MBps) Copying: 340/1024 [MB] (13 MBps) Copying: 361/1024 [MB] (20 MBps) Copying: 382/1024 [MB] (20 MBps) Copying: 399/1024 [MB] (16 MBps) Copying: 421/1024 [MB] (22 MBps) Copying: 444/1024 [MB] (22 MBps) Copying: 456/1024 [MB] (11 MBps) Copying: 467/1024 [MB] (10 MBps) Copying: 478/1024 [MB] (11 MBps) Copying: 489/1024 [MB] (10 MBps) Copying: 510/1024 [MB] (21 MBps) Copying: 521/1024 [MB] (11 MBps) Copying: 532/1024 [MB] (10 MBps) Copying: 543/1024 [MB] (11 MBps) Copying: 554/1024 [MB] (11 MBps) Copying: 565/1024 [MB] (10 MBps) Copying: 577/1024 [MB] (12 MBps) Copying: 588/1024 [MB] (10 MBps) Copying: 606/1024 [MB] (17 MBps) Copying: 623/1024 [MB] (17 MBps) Copying: 642/1024 [MB] (19 MBps) Copying: 659/1024 [MB] (16 MBps) Copying: 678/1024 [MB] (19 MBps) Copying: 688/1024 [MB] (10 MBps) Copying: 699/1024 [MB] (10 MBps) Copying: 709/1024 [MB] (10 MBps) Copying: 720/1024 [MB] (10 MBps) Copying: 735/1024 [MB] (14 MBps) Copying: 746/1024 [MB] (11 MBps) Copying: 764/1024 [MB] (18 MBps) Copying: 775/1024 [MB] (10 MBps) Copying: 793/1024 [MB] (18 MBps) Copying: 807/1024 [MB] (14 MBps) Copying: 818/1024 [MB] (10 MBps) Copying: 828/1024 [MB] (10 MBps) Copying: 839/1024 [MB] (10 MBps) Copying: 859/1024 [MB] (19 MBps) Copying: 883/1024 [MB] (24 MBps) Copying: 904/1024 [MB] (20 MBps) Copying: 921/1024 [MB] (17 MBps) Copying: 944/1024 [MB] (22 MBps) Copying: 966/1024 [MB] (22 MBps) Copying: 990/1024 [MB] (23 MBps) Copying: 1018/1024 [MB] (27 MBps) Copying: 1024/1024 [MB] (average 16 MBps)[2024-09-28 10:50:33.705335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.129 [2024-09-28 10:50:33.705893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:59.129 [2024-09-28 
10:50:33.705922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:59.129 [2024-09-28 10:50:33.705933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.129 [2024-09-28 10:50:33.705996] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:59.129 [2024-09-28 10:50:33.706801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.129 [2024-09-28 10:50:33.706831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:59.129 [2024-09-28 10:50:33.706853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.780 ms 00:29:59.129 [2024-09-28 10:50:33.706863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.129 [2024-09-28 10:50:33.707125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.129 [2024-09-28 10:50:33.707137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:59.129 [2024-09-28 10:50:33.707147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 00:29:59.129 [2024-09-28 10:50:33.707156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.129 [2024-09-28 10:50:33.707188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.129 [2024-09-28 10:50:33.707202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:29:59.129 [2024-09-28 10:50:33.707212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:59.129 [2024-09-28 10:50:33.707220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.129 [2024-09-28 10:50:33.707281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.129 [2024-09-28 10:50:33.707291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:29:59.129 [2024-09-28 10:50:33.707300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:29:59.129 [2024-09-28 10:50:33.707308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.129 [2024-09-28 10:50:33.707323] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:59.129 [2024-09-28 10:50:33.707336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:59.129 [2024-09-28 10:50:33.707345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:59.129 [2024-09-28 10:50:33.707353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:59.129 [2024-09-28 10:50:33.707361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:59.129 [2024-09-28 10:50:33.707369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:59.129 [2024-09-28 10:50:33.707377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:59.129 [2024-09-28 10:50:33.707384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:59.129 [2024-09-28 10:50:33.707392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:59.129 [2024-09-28 10:50:33.707400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: 
free 00:29:59.129 [2024-09-28 10:50:33.707408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:59.129 [2024-09-28 10:50:33.707417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:59.129 [2024-09-28 10:50:33.707424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:59.129 [2024-09-28 10:50:33.707432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:59.129 [2024-09-28 10:50:33.707440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:59.129 [2024-09-28 10:50:33.707448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:59.129 [2024-09-28 10:50:33.707457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:59.129 [2024-09-28 10:50:33.707464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:59.129 [2024-09-28 10:50:33.707472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 
261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.707994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.708002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.708011] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.708019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.708027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.708035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.708043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.708051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.708059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.708067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.708074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.708082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.708090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.708098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.708105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.708113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.708121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.708129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.708137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:59.130 [2024-09-28 10:50:33.708153] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:59.130 [2024-09-28 10:50:33.708168] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8a691307-d087-4481-8af4-9c5dad4525aa 00:29:59.130 [2024-09-28 10:50:33.708177] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:59.130 [2024-09-28 10:50:33.708185] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:29:59.130 [2024-09-28 10:50:33.708201] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:59.130 [2024-09-28 10:50:33.708213] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:59.130 [2024-09-28 10:50:33.708221] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:59.130 [2024-09-28 10:50:33.708230] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:59.130 [2024-09-28 10:50:33.708239] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:59.130 [2024-09-28 10:50:33.708246] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:59.131 [2024-09-28 
10:50:33.708252] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:59.131 [2024-09-28 10:50:33.708264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.131 [2024-09-28 10:50:33.708273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:59.131 [2024-09-28 10:50:33.708281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.941 ms 00:29:59.131 [2024-09-28 10:50:33.708289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.131 [2024-09-28 10:50:33.710853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.131 [2024-09-28 10:50:33.710891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:59.131 [2024-09-28 10:50:33.710911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.544 ms 00:29:59.131 [2024-09-28 10:50:33.710920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.131 [2024-09-28 10:50:33.711057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.131 [2024-09-28 10:50:33.711066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:59.131 [2024-09-28 10:50:33.711076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:29:59.131 [2024-09-28 10:50:33.711087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.131 [2024-09-28 10:50:33.718297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:59.131 [2024-09-28 10:50:33.718338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:59.131 [2024-09-28 10:50:33.718355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:59.131 [2024-09-28 10:50:33.718363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.131 [2024-09-28 10:50:33.718429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:59.131 [2024-09-28 10:50:33.718439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:59.131 [2024-09-28 10:50:33.718453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:59.131 [2024-09-28 10:50:33.718464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.131 [2024-09-28 10:50:33.718525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:59.131 [2024-09-28 10:50:33.718536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:59.131 [2024-09-28 10:50:33.718546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:59.131 [2024-09-28 10:50:33.718554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.131 [2024-09-28 10:50:33.718572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:59.131 [2024-09-28 10:50:33.718581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:59.131 [2024-09-28 10:50:33.718590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:59.131 [2024-09-28 10:50:33.718599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.131 [2024-09-28 10:50:33.731685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:59.131 [2024-09-28 10:50:33.731735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:59.131 [2024-09-28 10:50:33.731746] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:59.131 [2024-09-28 10:50:33.731754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.131 [2024-09-28 10:50:33.741904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:59.131 [2024-09-28 10:50:33.742104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:59.131 [2024-09-28 10:50:33.742123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:59.131 [2024-09-28 10:50:33.742139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.131 [2024-09-28 10:50:33.742186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:59.131 [2024-09-28 10:50:33.742195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:59.131 [2024-09-28 10:50:33.742204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:59.131 [2024-09-28 10:50:33.742211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.131 [2024-09-28 10:50:33.742245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:59.131 [2024-09-28 10:50:33.742254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:59.131 [2024-09-28 10:50:33.742282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:59.131 [2024-09-28 10:50:33.742290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.131 [2024-09-28 10:50:33.742348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:59.131 [2024-09-28 10:50:33.742358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:59.131 [2024-09-28 10:50:33.742367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:59.131 [2024-09-28 10:50:33.742375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.131 [2024-09-28 10:50:33.742398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:59.131 [2024-09-28 10:50:33.742407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:59.131 [2024-09-28 10:50:33.742416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:59.131 [2024-09-28 10:50:33.742423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.131 [2024-09-28 10:50:33.742460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:59.131 [2024-09-28 10:50:33.742477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:59.131 [2024-09-28 10:50:33.742492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:59.131 [2024-09-28 10:50:33.742500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.131 [2024-09-28 10:50:33.742541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:59.131 [2024-09-28 10:50:33.742550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:59.131 [2024-09-28 10:50:33.742559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:59.131 [2024-09-28 10:50:33.742567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.131 [2024-09-28 10:50:33.742695] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 37.327 ms, result 0 00:29:59.392 00:29:59.392 00:29:59.392 10:50:33 
ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:01.942 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:30:01.942 10:50:36 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:30:01.942 [2024-09-28 10:50:36.259847] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:30:01.942 [2024-09-28 10:50:36.260276] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95590 ] 00:30:01.942 [2024-09-28 10:50:36.392245] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:01.942 [2024-09-28 10:50:36.414482] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:01.942 [2024-09-28 10:50:36.452094] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:01.942 [2024-09-28 10:50:36.547042] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:01.942 [2024-09-28 10:50:36.547112] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:01.942 [2024-09-28 10:50:36.708234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.942 [2024-09-28 10:50:36.708304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:01.942 [2024-09-28 10:50:36.708320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:01.942 [2024-09-28 10:50:36.708329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.942 [2024-09-28 10:50:36.708393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.942 [2024-09-28 10:50:36.708405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:01.942 [2024-09-28 10:50:36.708414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:30:01.942 [2024-09-28 10:50:36.708423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.942 [2024-09-28 10:50:36.708448] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:01.942 [2024-09-28 10:50:36.708731] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:01.942 [2024-09-28 10:50:36.708754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.942 [2024-09-28 10:50:36.708765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:01.942 [2024-09-28 10:50:36.708774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:30:01.942 [2024-09-28 10:50:36.708788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.942 [2024-09-28 10:50:36.709159] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:30:01.942 [2024-09-28 10:50:36.709196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.942 [2024-09-28 10:50:36.709209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:01.942 [2024-09-28 10:50:36.709223] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:30:01.942 [2024-09-28 10:50:36.709240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.942 [2024-09-28 10:50:36.709314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.942 [2024-09-28 10:50:36.709333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:01.942 [2024-09-28 10:50:36.709346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:30:01.942 [2024-09-28 10:50:36.709359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.942 [2024-09-28 10:50:36.709722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.942 [2024-09-28 10:50:36.709746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:01.942 [2024-09-28 10:50:36.709756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:30:01.942 [2024-09-28 10:50:36.709765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.942 [2024-09-28 10:50:36.709852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.942 [2024-09-28 10:50:36.709862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:01.942 [2024-09-28 10:50:36.709872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:30:01.942 [2024-09-28 10:50:36.709884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.942 [2024-09-28 10:50:36.709914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.942 [2024-09-28 10:50:36.709928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:01.942 [2024-09-28 10:50:36.709939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:01.942 [2024-09-28 10:50:36.709947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.942 [2024-09-28 10:50:36.709987] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:01.942 [2024-09-28 10:50:36.712400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.942 [2024-09-28 10:50:36.712596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:01.942 [2024-09-28 10:50:36.712615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.418 ms 00:30:01.942 [2024-09-28 10:50:36.712624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.942 [2024-09-28 10:50:36.712667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.942 [2024-09-28 10:50:36.712683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:01.942 [2024-09-28 10:50:36.712692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:30:01.942 [2024-09-28 10:50:36.712701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.942 [2024-09-28 10:50:36.712763] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:01.942 [2024-09-28 10:50:36.712788] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:01.942 [2024-09-28 10:50:36.712837] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:01.942 [2024-09-28 10:50:36.712853] upgrade/ftl_sb_v5.c: 
294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:01.942 [2024-09-28 10:50:36.712982] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:01.942 [2024-09-28 10:50:36.712995] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:01.942 [2024-09-28 10:50:36.713006] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:01.942 [2024-09-28 10:50:36.713017] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:01.942 [2024-09-28 10:50:36.713030] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:01.942 [2024-09-28 10:50:36.713042] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:01.942 [2024-09-28 10:50:36.713049] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:01.942 [2024-09-28 10:50:36.713058] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:01.942 [2024-09-28 10:50:36.713066] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:01.942 [2024-09-28 10:50:36.713074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.942 [2024-09-28 10:50:36.713082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:01.942 [2024-09-28 10:50:36.713090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:30:01.942 [2024-09-28 10:50:36.713098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.942 [2024-09-28 10:50:36.713186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.942 [2024-09-28 10:50:36.713195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:01.942 [2024-09-28 10:50:36.713207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:30:01.942 [2024-09-28 10:50:36.713214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.942 [2024-09-28 10:50:36.713315] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:01.942 [2024-09-28 10:50:36.713331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:01.942 [2024-09-28 10:50:36.713345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:01.942 [2024-09-28 10:50:36.713353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:01.942 [2024-09-28 10:50:36.713362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:01.942 [2024-09-28 10:50:36.713369] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:01.942 [2024-09-28 10:50:36.713377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:01.942 [2024-09-28 10:50:36.713384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:01.942 [2024-09-28 10:50:36.713392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:01.942 [2024-09-28 10:50:36.713398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:01.942 [2024-09-28 10:50:36.713411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:01.942 [2024-09-28 10:50:36.713419] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 
MiB 00:30:01.942 [2024-09-28 10:50:36.713426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:01.942 [2024-09-28 10:50:36.713432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:01.942 [2024-09-28 10:50:36.713440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:01.942 [2024-09-28 10:50:36.713449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:01.942 [2024-09-28 10:50:36.713456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:01.942 [2024-09-28 10:50:36.713463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:01.942 [2024-09-28 10:50:36.713474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:01.942 [2024-09-28 10:50:36.713481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:01.942 [2024-09-28 10:50:36.713488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:01.942 [2024-09-28 10:50:36.713496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:01.942 [2024-09-28 10:50:36.713502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:01.942 [2024-09-28 10:50:36.713509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:01.942 [2024-09-28 10:50:36.713516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:01.942 [2024-09-28 10:50:36.713522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:01.943 [2024-09-28 10:50:36.713529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:01.943 [2024-09-28 10:50:36.713536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:01.943 [2024-09-28 10:50:36.713543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:01.943 [2024-09-28 10:50:36.713550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:01.943 [2024-09-28 10:50:36.713556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:01.943 [2024-09-28 10:50:36.713563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:01.943 [2024-09-28 10:50:36.713569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:01.943 [2024-09-28 10:50:36.713576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:01.943 [2024-09-28 10:50:36.713588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:01.943 [2024-09-28 10:50:36.713595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:01.943 [2024-09-28 10:50:36.713601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:01.943 [2024-09-28 10:50:36.713608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:01.943 [2024-09-28 10:50:36.713614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:01.943 [2024-09-28 10:50:36.713621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:01.943 [2024-09-28 10:50:36.713628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:01.943 [2024-09-28 10:50:36.713635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:01.943 [2024-09-28 10:50:36.713641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:01.943 [2024-09-28 10:50:36.713647] ftl_layout.c: 
775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:01.943 [2024-09-28 10:50:36.713655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:01.943 [2024-09-28 10:50:36.713662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:01.943 [2024-09-28 10:50:36.713670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:01.943 [2024-09-28 10:50:36.713687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:01.943 [2024-09-28 10:50:36.713695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:01.943 [2024-09-28 10:50:36.713702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:01.943 [2024-09-28 10:50:36.713711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:01.943 [2024-09-28 10:50:36.713719] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:01.943 [2024-09-28 10:50:36.713726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:01.943 [2024-09-28 10:50:36.713735] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:01.943 [2024-09-28 10:50:36.713744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:01.943 [2024-09-28 10:50:36.713753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:01.943 [2024-09-28 10:50:36.713760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:01.943 [2024-09-28 10:50:36.713767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:01.943 [2024-09-28 10:50:36.713775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:01.943 [2024-09-28 10:50:36.713783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:01.943 [2024-09-28 10:50:36.713790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:01.943 [2024-09-28 10:50:36.713798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:01.943 [2024-09-28 10:50:36.713805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:01.943 [2024-09-28 10:50:36.713813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:01.943 [2024-09-28 10:50:36.713822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:01.943 [2024-09-28 10:50:36.713829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:01.943 [2024-09-28 10:50:36.713840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:01.943 [2024-09-28 10:50:36.713848] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:01.943 [2024-09-28 10:50:36.713856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:01.943 [2024-09-28 10:50:36.713864] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:01.943 [2024-09-28 10:50:36.713873] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:01.943 [2024-09-28 10:50:36.713885] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:01.943 [2024-09-28 10:50:36.713893] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:01.943 [2024-09-28 10:50:36.713901] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:01.943 [2024-09-28 10:50:36.713909] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:01.943 [2024-09-28 10:50:36.713917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.943 [2024-09-28 10:50:36.713924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:01.943 [2024-09-28 10:50:36.713932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.670 ms 00:30:01.943 [2024-09-28 10:50:36.713939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.206 [2024-09-28 10:50:36.743002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.206 [2024-09-28 10:50:36.743095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:02.206 [2024-09-28 10:50:36.743135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.992 ms 00:30:02.206 [2024-09-28 10:50:36.743152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.206 [2024-09-28 10:50:36.743353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.206 [2024-09-28 10:50:36.743376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:02.206 [2024-09-28 10:50:36.743399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:30:02.206 [2024-09-28 10:50:36.743419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.206 [2024-09-28 10:50:36.756526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.206 [2024-09-28 10:50:36.756591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:02.206 [2024-09-28 10:50:36.756603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.967 ms 00:30:02.206 [2024-09-28 10:50:36.756611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.206 [2024-09-28 10:50:36.756649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.206 [2024-09-28 10:50:36.756663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:02.206 [2024-09-28 10:50:36.756672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:02.206 [2024-09-28 10:50:36.756680] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.206 [2024-09-28 10:50:36.756783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.207 [2024-09-28 10:50:36.756794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:02.207 [2024-09-28 10:50:36.756807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:30:02.207 [2024-09-28 10:50:36.756816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.207 [2024-09-28 10:50:36.756943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.207 [2024-09-28 10:50:36.756952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:02.207 [2024-09-28 10:50:36.757012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:30:02.207 [2024-09-28 10:50:36.757022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.207 [2024-09-28 10:50:36.764556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.207 [2024-09-28 10:50:36.764607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:02.207 [2024-09-28 10:50:36.764618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.511 ms 00:30:02.207 [2024-09-28 10:50:36.764635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.207 [2024-09-28 10:50:36.764756] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:02.207 [2024-09-28 10:50:36.764770] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:02.207 [2024-09-28 10:50:36.764779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.207 [2024-09-28 10:50:36.764788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:02.207 [2024-09-28 10:50:36.764797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:30:02.207 [2024-09-28 10:50:36.764805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.207 [2024-09-28 10:50:36.777128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.207 [2024-09-28 10:50:36.777169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:02.207 [2024-09-28 10:50:36.777192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.302 ms 00:30:02.207 [2024-09-28 10:50:36.777208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.207 [2024-09-28 10:50:36.777342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.207 [2024-09-28 10:50:36.777351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:02.207 [2024-09-28 10:50:36.777360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:30:02.207 [2024-09-28 10:50:36.777372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.207 [2024-09-28 10:50:36.777430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.207 [2024-09-28 10:50:36.777440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:02.207 [2024-09-28 10:50:36.777453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:30:02.207 [2024-09-28 10:50:36.777462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.207 [2024-09-28 
10:50:36.777773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.207 [2024-09-28 10:50:36.777785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:02.207 [2024-09-28 10:50:36.777801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:30:02.207 [2024-09-28 10:50:36.777811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.207 [2024-09-28 10:50:36.777831] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:30:02.207 [2024-09-28 10:50:36.777844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.207 [2024-09-28 10:50:36.777852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:02.207 [2024-09-28 10:50:36.777866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:30:02.207 [2024-09-28 10:50:36.777874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.207 [2024-09-28 10:50:36.787448] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:02.207 [2024-09-28 10:50:36.787620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.207 [2024-09-28 10:50:36.787632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:02.207 [2024-09-28 10:50:36.787643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.727 ms 00:30:02.207 [2024-09-28 10:50:36.787651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.207 [2024-09-28 10:50:36.790172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.207 [2024-09-28 10:50:36.790212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:02.207 [2024-09-28 10:50:36.790223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.488 ms 00:30:02.207 [2024-09-28 10:50:36.790231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.207 [2024-09-28 10:50:36.790347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.207 [2024-09-28 10:50:36.790358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:02.207 [2024-09-28 10:50:36.790369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:30:02.207 [2024-09-28 10:50:36.790376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.207 [2024-09-28 10:50:36.790404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.207 [2024-09-28 10:50:36.790413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:02.207 [2024-09-28 10:50:36.790421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:02.207 [2024-09-28 10:50:36.790428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.207 [2024-09-28 10:50:36.790461] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:02.207 [2024-09-28 10:50:36.790473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.207 [2024-09-28 10:50:36.790481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:02.207 [2024-09-28 10:50:36.790489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:30:02.207 [2024-09-28 10:50:36.790497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:30:02.207 [2024-09-28 10:50:36.797257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.207 [2024-09-28 10:50:36.797454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:02.207 [2024-09-28 10:50:36.797473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.738 ms 00:30:02.207 [2024-09-28 10:50:36.797482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.207 [2024-09-28 10:50:36.797567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.207 [2024-09-28 10:50:36.797577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:02.207 [2024-09-28 10:50:36.797586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:30:02.207 [2024-09-28 10:50:36.797597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.207 [2024-09-28 10:50:36.798850] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 90.152 ms, result 0 00:31:01.392  Copying: 22/1024 [MB] (22 MBps) Copying: 38/1024 [MB] (16 MBps) Copying: 49/1024 [MB] (10 MBps) Copying: 59/1024 [MB] (10 MBps) Copying: 69/1024 [MB] (10 MBps) Copying: 82/1024 [MB] (12 MBps) Copying: 92/1024 [MB] (10 MBps) Copying: 104952/1048576 [kB] (10240 kBps) Copying: 113/1024 [MB] (10 MBps) Copying: 123/1024 [MB] (10 MBps) Copying: 134/1024 [MB] (11 MBps) Copying: 145/1024 [MB] (10 MBps) Copying: 156/1024 [MB] (11 MBps) Copying: 192/1024 [MB] (36 MBps) Copying: 224/1024 [MB] (31 MBps) Copying: 239/1024 [MB] (15 MBps) Copying: 255/1024 [MB] (16 MBps) Copying: 268/1024 [MB] (13 MBps) Copying: 279/1024 [MB] (10 MBps) Copying: 298/1024 [MB] (18 MBps) Copying: 314/1024 [MB] (16 MBps) Copying: 326/1024 [MB] (11 MBps) Copying: 342/1024 [MB] (16 MBps) Copying: 362/1024 [MB] (19 MBps) Copying: 380/1024 [MB] (17 MBps) Copying: 393/1024 [MB] (13 MBps) Copying: 410/1024 [MB] (17 MBps) Copying: 432/1024 [MB] (21 MBps) Copying: 443/1024 [MB] (11 MBps) Copying: 453/1024 [MB] (10 MBps) Copying: 473/1024 [MB] (19 MBps) Copying: 492/1024 [MB] (18 MBps) Copying: 511/1024 [MB] (18 MBps) Copying: 534/1024 [MB] (23 MBps) Copying: 557/1024 [MB] (23 MBps) Copying: 580/1024 [MB] (22 MBps) Copying: 602/1024 [MB] (22 MBps) Copying: 624/1024 [MB] (22 MBps) Copying: 635/1024 [MB] (11 MBps) Copying: 653/1024 [MB] (17 MBps) Copying: 670/1024 [MB] (17 MBps) Copying: 693/1024 [MB] (22 MBps) Copying: 708/1024 [MB] (15 MBps) Copying: 729/1024 [MB] (20 MBps) Copying: 742/1024 [MB] (13 MBps) Copying: 770792/1048576 [kB] (10164 kBps) Copying: 781016/1048576 [kB] (10224 kBps) Copying: 772/1024 [MB] (10 MBps) Copying: 783/1024 [MB] (10 MBps) Copying: 799/1024 [MB] (16 MBps) Copying: 814/1024 [MB] (15 MBps) Copying: 829/1024 [MB] (14 MBps) Copying: 866/1024 [MB] (36 MBps) Copying: 904/1024 [MB] (38 MBps) Copying: 937/1024 [MB] (33 MBps) Copying: 969/1024 [MB] (31 MBps) Copying: 1008/1024 [MB] (39 MBps) Copying: 1022/1024 [MB] (14 MBps) Copying: 1048464/1048576 [kB] (1256 kBps) Copying: 1024/1024 [MB] (average 17 MBps)[2024-09-28 10:51:35.957049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.392 [2024-09-28 10:51:35.957131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:01.392 [2024-09-28 10:51:35.957147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:01.392 [2024-09-28 10:51:35.957158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:31:01.392 [2024-09-28 10:51:35.958603] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:01.392 [2024-09-28 10:51:35.961425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.392 [2024-09-28 10:51:35.961479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:01.392 [2024-09-28 10:51:35.961492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.761 ms 00:31:01.392 [2024-09-28 10:51:35.961503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.392 [2024-09-28 10:51:35.972851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.392 [2024-09-28 10:51:35.972944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:01.392 [2024-09-28 10:51:35.972991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.561 ms 00:31:01.392 [2024-09-28 10:51:35.973002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.392 [2024-09-28 10:51:35.973037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.392 [2024-09-28 10:51:35.973048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:31:01.392 [2024-09-28 10:51:35.973058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:01.392 [2024-09-28 10:51:35.973066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.392 [2024-09-28 10:51:35.973127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.392 [2024-09-28 10:51:35.973138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:31:01.392 [2024-09-28 10:51:35.973156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:31:01.392 [2024-09-28 10:51:35.973164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.392 [2024-09-28 10:51:35.973178] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:01.392 [2024-09-28 10:51:35.973190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 128768 / 261120 wr_cnt: 1 state: open 00:31:01.392 [2024-09-28 10:51:35.973213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973289] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 
[2024-09-28 10:51:35.973488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:31:01.392 [2024-09-28 10:51:35.973693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:01.392 [2024-09-28 10:51:35.973746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:01.393 [2024-09-28 10:51:35.973754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:01.393 [2024-09-28 10:51:35.973761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:01.393 [2024-09-28 10:51:35.973771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:01.393 [2024-09-28 10:51:35.973778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:01.393 [2024-09-28 10:51:35.973786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:01.393 [2024-09-28 10:51:35.973797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:01.393 [2024-09-28 10:51:35.973805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:01.393 [2024-09-28 10:51:35.973813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:01.393 [2024-09-28 10:51:35.973820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:01.393 [2024-09-28 10:51:35.973828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:01.393 [2024-09-28 10:51:35.973836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:01.393 [2024-09-28 10:51:35.973843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:01.393 [2024-09-28 10:51:35.973851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:01.393 [2024-09-28 10:51:35.973860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:01.393 [2024-09-28 10:51:35.973867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:01.393 [2024-09-28 10:51:35.973875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:01.393 [2024-09-28 10:51:35.973883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:31:01.393 [2024-09-28 10:51:35.973890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:01.393 [2024-09-28 10:51:35.973899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:01.393 [2024-09-28 10:51:35.973909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:01.393 [2024-09-28 10:51:35.973917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:01.393 [2024-09-28 10:51:35.973926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:01.393 [2024-09-28 10:51:35.973934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:01.393 [2024-09-28 10:51:35.973942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:01.393 [2024-09-28 10:51:35.973949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:01.393 [2024-09-28 10:51:35.973957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:01.393 [2024-09-28 10:51:35.973978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:01.393 [2024-09-28 10:51:35.973987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:01.393 [2024-09-28 10:51:35.973995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:01.393 [2024-09-28 10:51:35.974003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:01.393 [2024-09-28 10:51:35.974011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:01.393 [2024-09-28 10:51:35.974019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:01.393 [2024-09-28 10:51:35.974036] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:01.393 [2024-09-28 10:51:35.974045] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8a691307-d087-4481-8af4-9c5dad4525aa 00:31:01.393 [2024-09-28 10:51:35.974054] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 128768 00:31:01.393 [2024-09-28 10:51:35.974064] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 128800 00:31:01.393 [2024-09-28 10:51:35.974072] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 128768 00:31:01.393 [2024-09-28 10:51:35.974085] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002 00:31:01.393 [2024-09-28 10:51:35.974093] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:01.393 [2024-09-28 10:51:35.974104] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:01.393 [2024-09-28 10:51:35.974112] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:01.393 [2024-09-28 10:51:35.974121] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:01.393 [2024-09-28 10:51:35.974129] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:01.393 [2024-09-28 10:51:35.974136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:31:01.393 [2024-09-28 10:51:35.974144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:01.393 [2024-09-28 10:51:35.974156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.960 ms 00:31:01.393 [2024-09-28 10:51:35.974165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.393 [2024-09-28 10:51:35.976743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.393 [2024-09-28 10:51:35.976782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:01.393 [2024-09-28 10:51:35.976793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.562 ms 00:31:01.393 [2024-09-28 10:51:35.976817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.393 [2024-09-28 10:51:35.976948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.393 [2024-09-28 10:51:35.976979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:01.393 [2024-09-28 10:51:35.976991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:31:01.393 [2024-09-28 10:51:35.976999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.393 [2024-09-28 10:51:35.984377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:01.393 [2024-09-28 10:51:35.984558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:01.393 [2024-09-28 10:51:35.984627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:01.393 [2024-09-28 10:51:35.984651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.393 [2024-09-28 10:51:35.984731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:01.393 [2024-09-28 10:51:35.984752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:01.393 [2024-09-28 10:51:35.984773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:01.393 [2024-09-28 10:51:35.984794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.393 [2024-09-28 10:51:35.984860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:01.393 [2024-09-28 10:51:35.984884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:01.393 [2024-09-28 10:51:35.984906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:01.393 [2024-09-28 10:51:35.985010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.393 [2024-09-28 10:51:35.985049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:01.393 [2024-09-28 10:51:35.985239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:01.393 [2024-09-28 10:51:35.985267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:01.393 [2024-09-28 10:51:35.985289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.393 [2024-09-28 10:51:35.999019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:01.393 [2024-09-28 10:51:35.999201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:01.393 [2024-09-28 10:51:35.999268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:01.393 [2024-09-28 10:51:35.999291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.393 [2024-09-28 
10:51:36.010046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:01.393 [2024-09-28 10:51:36.010223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:01.393 [2024-09-28 10:51:36.010280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:01.393 [2024-09-28 10:51:36.010303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.393 [2024-09-28 10:51:36.010362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:01.393 [2024-09-28 10:51:36.010386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:01.393 [2024-09-28 10:51:36.010408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:01.393 [2024-09-28 10:51:36.010427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.393 [2024-09-28 10:51:36.010487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:01.393 [2024-09-28 10:51:36.010510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:01.393 [2024-09-28 10:51:36.010531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:01.393 [2024-09-28 10:51:36.010595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.393 [2024-09-28 10:51:36.010671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:01.393 [2024-09-28 10:51:36.010695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:01.393 [2024-09-28 10:51:36.010724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:01.393 [2024-09-28 10:51:36.010837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.393 [2024-09-28 10:51:36.010897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:01.393 [2024-09-28 10:51:36.010921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:01.393 [2024-09-28 10:51:36.010946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:01.393 [2024-09-28 10:51:36.011034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.393 [2024-09-28 10:51:36.011094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:01.393 [2024-09-28 10:51:36.011118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:01.393 [2024-09-28 10:51:36.011151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:01.393 [2024-09-28 10:51:36.011276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.393 [2024-09-28 10:51:36.011344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:01.393 [2024-09-28 10:51:36.011376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:01.393 [2024-09-28 10:51:36.011484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:01.393 [2024-09-28 10:51:36.011539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.393 [2024-09-28 10:51:36.011708] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 57.647 ms, result 0 00:31:02.336 00:31:02.336 00:31:02.336 10:51:36 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:31:02.336 [2024-09-28 10:51:37.021453] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:31:02.336 [2024-09-28 10:51:37.021818] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96197 ] 00:31:02.597 [2024-09-28 10:51:37.155722] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:02.597 [2024-09-28 10:51:37.178271] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:02.597 [2024-09-28 10:51:37.227601] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:02.597 [2024-09-28 10:51:37.336621] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:02.597 [2024-09-28 10:51:37.336700] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:02.859 [2024-09-28 10:51:37.498704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.859 [2024-09-28 10:51:37.498770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:02.859 [2024-09-28 10:51:37.498786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:02.859 [2024-09-28 10:51:37.498795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.859 [2024-09-28 10:51:37.498857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.859 [2024-09-28 10:51:37.498869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:02.859 [2024-09-28 10:51:37.498878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:31:02.859 [2024-09-28 10:51:37.498887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.859 [2024-09-28 10:51:37.498914] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:02.859 [2024-09-28 10:51:37.499230] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:02.859 [2024-09-28 10:51:37.499251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.859 [2024-09-28 10:51:37.499260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:02.859 [2024-09-28 10:51:37.499274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.345 ms 00:31:02.859 [2024-09-28 10:51:37.499286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.859 [2024-09-28 10:51:37.499526] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:31:02.859 [2024-09-28 10:51:37.499555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.859 [2024-09-28 10:51:37.499564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:02.859 [2024-09-28 10:51:37.499576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:31:02.859 [2024-09-28 10:51:37.499590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.859 [2024-09-28 10:51:37.499646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.859 [2024-09-28 10:51:37.499660] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:02.859 [2024-09-28 10:51:37.499668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:31:02.860 [2024-09-28 10:51:37.499676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.860 [2024-09-28 10:51:37.499923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.860 [2024-09-28 10:51:37.499945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:02.860 [2024-09-28 10:51:37.499954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.204 ms 00:31:02.860 [2024-09-28 10:51:37.499990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.860 [2024-09-28 10:51:37.500081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.860 [2024-09-28 10:51:37.500093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:02.860 [2024-09-28 10:51:37.500102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:31:02.860 [2024-09-28 10:51:37.500110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.860 [2024-09-28 10:51:37.500136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.860 [2024-09-28 10:51:37.500145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:02.860 [2024-09-28 10:51:37.500156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:31:02.860 [2024-09-28 10:51:37.500166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.860 [2024-09-28 10:51:37.500373] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:02.860 [2024-09-28 10:51:37.502609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.860 [2024-09-28 10:51:37.502809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:02.860 [2024-09-28 10:51:37.502829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.247 ms 00:31:02.860 [2024-09-28 10:51:37.502849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.860 [2024-09-28 10:51:37.502944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.860 [2024-09-28 10:51:37.502992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:02.860 [2024-09-28 10:51:37.503005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:31:02.860 [2024-09-28 10:51:37.503013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.860 [2024-09-28 10:51:37.503068] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:02.860 [2024-09-28 10:51:37.503092] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:02.860 [2024-09-28 10:51:37.503133] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:02.860 [2024-09-28 10:51:37.503154] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:02.860 [2024-09-28 10:51:37.503261] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:02.860 [2024-09-28 10:51:37.503273] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:02.860 [2024-09-28 10:51:37.503285] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:02.860 [2024-09-28 10:51:37.503295] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:02.860 [2024-09-28 10:51:37.503307] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:02.860 [2024-09-28 10:51:37.503319] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:02.860 [2024-09-28 10:51:37.503327] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:02.860 [2024-09-28 10:51:37.503339] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:02.860 [2024-09-28 10:51:37.503354] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:02.860 [2024-09-28 10:51:37.503366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.860 [2024-09-28 10:51:37.503374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:02.860 [2024-09-28 10:51:37.503387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:31:02.860 [2024-09-28 10:51:37.503396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.860 [2024-09-28 10:51:37.503484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.860 [2024-09-28 10:51:37.503494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:02.860 [2024-09-28 10:51:37.503511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:31:02.860 [2024-09-28 10:51:37.503518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.860 [2024-09-28 10:51:37.503619] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:02.860 [2024-09-28 10:51:37.503632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:02.860 [2024-09-28 10:51:37.503641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:02.860 [2024-09-28 10:51:37.503648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:02.860 [2024-09-28 10:51:37.503660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:02.860 [2024-09-28 10:51:37.503668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:02.860 [2024-09-28 10:51:37.503676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:02.860 [2024-09-28 10:51:37.503686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:02.860 [2024-09-28 10:51:37.503694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:02.860 [2024-09-28 10:51:37.503703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:02.860 [2024-09-28 10:51:37.503717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:02.860 [2024-09-28 10:51:37.503725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:02.860 [2024-09-28 10:51:37.503733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:02.860 [2024-09-28 10:51:37.503740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:02.860 [2024-09-28 10:51:37.503748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 
00:31:02.860 [2024-09-28 10:51:37.503756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:02.860 [2024-09-28 10:51:37.503763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:02.860 [2024-09-28 10:51:37.503770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:02.860 [2024-09-28 10:51:37.503778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:02.860 [2024-09-28 10:51:37.503785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:02.860 [2024-09-28 10:51:37.503791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:02.860 [2024-09-28 10:51:37.503798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:02.860 [2024-09-28 10:51:37.503805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:02.860 [2024-09-28 10:51:37.503814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:02.860 [2024-09-28 10:51:37.503821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:02.860 [2024-09-28 10:51:37.503828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:02.860 [2024-09-28 10:51:37.503835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:02.860 [2024-09-28 10:51:37.503842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:02.860 [2024-09-28 10:51:37.503849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:02.860 [2024-09-28 10:51:37.503856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:02.860 [2024-09-28 10:51:37.503862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:02.860 [2024-09-28 10:51:37.503869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:02.860 [2024-09-28 10:51:37.503876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:02.860 [2024-09-28 10:51:37.503883] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:02.860 [2024-09-28 10:51:37.503891] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:02.860 [2024-09-28 10:51:37.503898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:02.860 [2024-09-28 10:51:37.503904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:02.860 [2024-09-28 10:51:37.503911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:02.860 [2024-09-28 10:51:37.503917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:02.860 [2024-09-28 10:51:37.503926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:02.860 [2024-09-28 10:51:37.503932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:02.860 [2024-09-28 10:51:37.503939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:02.860 [2024-09-28 10:51:37.503946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:02.860 [2024-09-28 10:51:37.503952] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:02.860 [2024-09-28 10:51:37.503982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:02.860 [2024-09-28 10:51:37.503991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:02.860 [2024-09-28 10:51:37.503999] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:02.860 [2024-09-28 10:51:37.504011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:02.860 [2024-09-28 10:51:37.504019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:02.860 [2024-09-28 10:51:37.504026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:02.860 [2024-09-28 10:51:37.504034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:02.860 [2024-09-28 10:51:37.504041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:02.860 [2024-09-28 10:51:37.504048] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:02.860 [2024-09-28 10:51:37.504056] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:02.860 [2024-09-28 10:51:37.504070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:02.860 [2024-09-28 10:51:37.504082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:02.860 [2024-09-28 10:51:37.504090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:02.860 [2024-09-28 10:51:37.504097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:02.860 [2024-09-28 10:51:37.504106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:02.860 [2024-09-28 10:51:37.504113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:02.860 [2024-09-28 10:51:37.504121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:02.861 [2024-09-28 10:51:37.504128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:02.861 [2024-09-28 10:51:37.504136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:02.861 [2024-09-28 10:51:37.504144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:02.861 [2024-09-28 10:51:37.504152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:02.861 [2024-09-28 10:51:37.504160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:02.861 [2024-09-28 10:51:37.504168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:02.861 [2024-09-28 10:51:37.504176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:02.861 [2024-09-28 10:51:37.504183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:02.861 [2024-09-28 10:51:37.504191] upgrade/ftl_sb_v5.c: 
422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:02.861 [2024-09-28 10:51:37.504200] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:02.861 [2024-09-28 10:51:37.504211] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:02.861 [2024-09-28 10:51:37.504218] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:02.861 [2024-09-28 10:51:37.504225] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:02.861 [2024-09-28 10:51:37.504232] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:02.861 [2024-09-28 10:51:37.504242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.861 [2024-09-28 10:51:37.504253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:02.861 [2024-09-28 10:51:37.504261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.691 ms 00:31:02.861 [2024-09-28 10:51:37.504272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.861 [2024-09-28 10:51:37.521381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.861 [2024-09-28 10:51:37.521622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:02.861 [2024-09-28 10:51:37.521655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.065 ms 00:31:02.861 [2024-09-28 10:51:37.521665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.861 [2024-09-28 10:51:37.521774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.861 [2024-09-28 10:51:37.521784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:02.861 [2024-09-28 10:51:37.521796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:31:02.861 [2024-09-28 10:51:37.521805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.861 [2024-09-28 10:51:37.534153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.861 [2024-09-28 10:51:37.534225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:02.861 [2024-09-28 10:51:37.534238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.274 ms 00:31:02.861 [2024-09-28 10:51:37.534246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.861 [2024-09-28 10:51:37.534291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.861 [2024-09-28 10:51:37.534300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:02.861 [2024-09-28 10:51:37.534310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:02.861 [2024-09-28 10:51:37.534327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.861 [2024-09-28 10:51:37.534428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.861 [2024-09-28 10:51:37.534441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:02.861 [2024-09-28 10:51:37.534453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.049 ms 00:31:02.861 [2024-09-28 10:51:37.534461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.861 [2024-09-28 10:51:37.534589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.861 [2024-09-28 10:51:37.534600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:02.861 [2024-09-28 10:51:37.534608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:31:02.861 [2024-09-28 10:51:37.534616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.861 [2024-09-28 10:51:37.541846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.861 [2024-09-28 10:51:37.541895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:02.861 [2024-09-28 10:51:37.541914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.213 ms 00:31:02.861 [2024-09-28 10:51:37.541925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.861 [2024-09-28 10:51:37.542076] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:31:02.861 [2024-09-28 10:51:37.542092] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:02.861 [2024-09-28 10:51:37.542104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.861 [2024-09-28 10:51:37.542112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:02.861 [2024-09-28 10:51:37.542122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:31:02.861 [2024-09-28 10:51:37.542129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.861 [2024-09-28 10:51:37.554640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.861 [2024-09-28 10:51:37.554685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:02.861 [2024-09-28 10:51:37.554706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.491 ms 00:31:02.861 [2024-09-28 10:51:37.554721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.861 [2024-09-28 10:51:37.554851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.861 [2024-09-28 10:51:37.554861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:02.861 [2024-09-28 10:51:37.554871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:31:02.861 [2024-09-28 10:51:37.554887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.861 [2024-09-28 10:51:37.554939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.861 [2024-09-28 10:51:37.554949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:02.861 [2024-09-28 10:51:37.554977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:31:02.861 [2024-09-28 10:51:37.554986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.861 [2024-09-28 10:51:37.555301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.861 [2024-09-28 10:51:37.555314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:02.861 [2024-09-28 10:51:37.555322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:31:02.861 [2024-09-28 
10:51:37.555329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.861 [2024-09-28 10:51:37.555345] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:31:02.861 [2024-09-28 10:51:37.555356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.861 [2024-09-28 10:51:37.555370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:02.861 [2024-09-28 10:51:37.555384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:31:02.861 [2024-09-28 10:51:37.555392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.861 [2024-09-28 10:51:37.564668] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:02.861 [2024-09-28 10:51:37.564996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.861 [2024-09-28 10:51:37.565014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:02.861 [2024-09-28 10:51:37.565025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.586 ms 00:31:02.861 [2024-09-28 10:51:37.565034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.861 [2024-09-28 10:51:37.567720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.861 [2024-09-28 10:51:37.567762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:02.861 [2024-09-28 10:51:37.567774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.655 ms 00:31:02.861 [2024-09-28 10:51:37.567782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.861 [2024-09-28 10:51:37.567867] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:31:02.861 [2024-09-28 10:51:37.568654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.861 [2024-09-28 10:51:37.568757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:02.861 [2024-09-28 10:51:37.568813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.807 ms 00:31:02.861 [2024-09-28 10:51:37.568830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.861 [2024-09-28 10:51:37.568869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.861 [2024-09-28 10:51:37.568878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:02.861 [2024-09-28 10:51:37.568888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:02.861 [2024-09-28 10:51:37.568896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.861 [2024-09-28 10:51:37.568934] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:02.861 [2024-09-28 10:51:37.568945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.861 [2024-09-28 10:51:37.568988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:02.861 [2024-09-28 10:51:37.568998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:31:02.861 [2024-09-28 10:51:37.569007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.861 [2024-09-28 10:51:37.575790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.861 [2024-09-28 10:51:37.575998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Set FTL dirty state 00:31:02.861 [2024-09-28 10:51:37.576019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.759 ms 00:31:02.861 [2024-09-28 10:51:37.576028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.861 [2024-09-28 10:51:37.576463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.861 [2024-09-28 10:51:37.576519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:02.861 [2024-09-28 10:51:37.576533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:31:02.861 [2024-09-28 10:51:37.576552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.861 [2024-09-28 10:51:37.579747] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 80.137 ms, result 0 00:32:14.744  Copying: 19/1024 [MB] (19 MBps) Copying: 40/1024 [MB] (21 MBps) Copying: 54/1024 [MB] (14 MBps) Copying: 80/1024 [MB] (26 MBps) Copying: 91/1024 [MB] (10 MBps) Copying: 103/1024 [MB] (11 MBps) Copying: 117/1024 [MB] (14 MBps) Copying: 129/1024 [MB] (12 MBps) Copying: 143/1024 [MB] (13 MBps) Copying: 154/1024 [MB] (11 MBps) Copying: 173/1024 [MB] (18 MBps) Copying: 194/1024 [MB] (21 MBps) Copying: 206/1024 [MB] (11 MBps) Copying: 218/1024 [MB] (11 MBps) Copying: 229/1024 [MB] (11 MBps) Copying: 240/1024 [MB] (10 MBps) Copying: 250/1024 [MB] (10 MBps) Copying: 261/1024 [MB] (10 MBps) Copying: 272/1024 [MB] (10 MBps) Copying: 282/1024 [MB] (10 MBps) Copying: 297/1024 [MB] (15 MBps) Copying: 311/1024 [MB] (13 MBps) Copying: 322/1024 [MB] (10 MBps) Copying: 338/1024 [MB] (16 MBps) Copying: 358/1024 [MB] (19 MBps) Copying: 374/1024 [MB] (15 MBps) Copying: 389/1024 [MB] (15 MBps) Copying: 405/1024 [MB] (15 MBps) Copying: 423/1024 [MB] (18 MBps) Copying: 436/1024 [MB] (13 MBps) Copying: 451/1024 [MB] (14 MBps) Copying: 467/1024 [MB] (16 MBps) Copying: 484/1024 [MB] (16 MBps) Copying: 503/1024 [MB] (18 MBps) Copying: 517/1024 [MB] (14 MBps) Copying: 531/1024 [MB] (13 MBps) Copying: 542/1024 [MB] (10 MBps) Copying: 557/1024 [MB] (15 MBps) Copying: 568/1024 [MB] (10 MBps) Copying: 584/1024 [MB] (16 MBps) Copying: 599/1024 [MB] (15 MBps) Copying: 615/1024 [MB] (15 MBps) Copying: 632/1024 [MB] (17 MBps) Copying: 645/1024 [MB] (13 MBps) Copying: 656/1024 [MB] (11 MBps) Copying: 666/1024 [MB] (10 MBps) Copying: 677/1024 [MB] (10 MBps) Copying: 688/1024 [MB] (10 MBps) Copying: 698/1024 [MB] (10 MBps) Copying: 708/1024 [MB] (10 MBps) Copying: 729/1024 [MB] (20 MBps) Copying: 747/1024 [MB] (17 MBps) Copying: 758/1024 [MB] (10 MBps) Copying: 768/1024 [MB] (10 MBps) Copying: 779/1024 [MB] (11 MBps) Copying: 790/1024 [MB] (10 MBps) Copying: 800/1024 [MB] (10 MBps) Copying: 811/1024 [MB] (10 MBps) Copying: 834/1024 [MB] (23 MBps) Copying: 845/1024 [MB] (10 MBps) Copying: 855/1024 [MB] (10 MBps) Copying: 869/1024 [MB] (14 MBps) Copying: 890/1024 [MB] (20 MBps) Copying: 908/1024 [MB] (17 MBps) Copying: 925/1024 [MB] (16 MBps) Copying: 942/1024 [MB] (16 MBps) Copying: 959/1024 [MB] (17 MBps) Copying: 974/1024 [MB] (15 MBps) Copying: 996/1024 [MB] (21 MBps) Copying: 1007/1024 [MB] (10 MBps) Copying: 1017/1024 [MB] (10 MBps) Copying: 1024/1024 [MB] (average 14 MBps)[2024-09-28 10:52:49.364630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.744 [2024-09-28 10:52:49.364727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:14.744 [2024-09-28 10:52:49.364750] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:14.744 [2024-09-28 10:52:49.364772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.744 [2024-09-28 10:52:49.364808] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:14.744 [2024-09-28 10:52:49.365682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.744 [2024-09-28 10:52:49.365914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:14.744 [2024-09-28 10:52:49.365931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.853 ms 00:32:14.744 [2024-09-28 10:52:49.365944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.744 [2024-09-28 10:52:49.366335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.744 [2024-09-28 10:52:49.366352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:14.744 [2024-09-28 10:52:49.366365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.334 ms 00:32:14.744 [2024-09-28 10:52:49.366377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.744 [2024-09-28 10:52:49.366417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.744 [2024-09-28 10:52:49.366430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:32:14.744 [2024-09-28 10:52:49.366442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:14.744 [2024-09-28 10:52:49.366453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.744 [2024-09-28 10:52:49.366531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.744 [2024-09-28 10:52:49.366548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:14.744 [2024-09-28 10:52:49.366560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:32:14.744 [2024-09-28 10:52:49.366570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.744 [2024-09-28 10:52:49.366588] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:14.744 [2024-09-28 10:52:49.366617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:32:14.744 [2024-09-28 10:52:49.366631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.366644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.366656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.366666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.366678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.366688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.366701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.366712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 
00:32:14.744 [2024-09-28 10:52:49.366724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.366735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.366746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.366757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.366767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.366778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.366789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.366800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.366811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.366821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.366830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.366840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.366851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.366859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.366868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.366876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.366885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.366894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.366902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.366911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.366920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.366928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.366937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.366946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.366954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 
wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.366983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.366992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.367001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.367019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.367029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.367038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.367047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.367058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.367068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.367077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.367086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.367096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.367105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.367114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.367123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.367131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.367140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.367148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.367158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.367167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.367176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.367185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.367194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.367204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.367214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.367223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.367232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.367241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.367250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.367259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:14.744 [2024-09-28 10:52:49.367269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:14.745 [2024-09-28 10:52:49.367279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:14.745 [2024-09-28 10:52:49.367288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:14.745 [2024-09-28 10:52:49.367298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:14.745 [2024-09-28 10:52:49.367307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:14.745 [2024-09-28 10:52:49.367316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:14.745 [2024-09-28 10:52:49.367325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:14.745 [2024-09-28 10:52:49.367335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:14.745 [2024-09-28 10:52:49.367344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:14.745 [2024-09-28 10:52:49.367354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:14.745 [2024-09-28 10:52:49.367363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:14.745 [2024-09-28 10:52:49.367372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:14.745 [2024-09-28 10:52:49.367381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:14.745 [2024-09-28 10:52:49.367390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:14.745 [2024-09-28 10:52:49.367399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:14.745 [2024-09-28 10:52:49.367408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:14.745 [2024-09-28 10:52:49.367416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:14.745 [2024-09-28 10:52:49.367425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:14.745 [2024-09-28 10:52:49.367434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:14.745 [2024-09-28 10:52:49.367443] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:14.745 [2024-09-28 10:52:49.367453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:14.745 [2024-09-28 10:52:49.367462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:14.745 [2024-09-28 10:52:49.367471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:14.745 [2024-09-28 10:52:49.367480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:14.745 [2024-09-28 10:52:49.367490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:14.745 [2024-09-28 10:52:49.367498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:14.745 [2024-09-28 10:52:49.367508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:14.745 [2024-09-28 10:52:49.367517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:14.745 [2024-09-28 10:52:49.367527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:14.745 [2024-09-28 10:52:49.367536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:14.745 [2024-09-28 10:52:49.367545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:14.745 [2024-09-28 10:52:49.367554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:14.745 [2024-09-28 10:52:49.367563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:14.745 [2024-09-28 10:52:49.367573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:14.745 [2024-09-28 10:52:49.367582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:14.745 [2024-09-28 10:52:49.367591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:14.745 [2024-09-28 10:52:49.367609] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:14.745 [2024-09-28 10:52:49.367618] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8a691307-d087-4481-8af4-9c5dad4525aa 00:32:14.745 [2024-09-28 10:52:49.367633] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:32:14.745 [2024-09-28 10:52:49.367642] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 2336 00:32:14.745 [2024-09-28 10:52:49.367650] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 2304 00:32:14.745 [2024-09-28 10:52:49.367661] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0139 00:32:14.745 [2024-09-28 10:52:49.367674] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:14.745 [2024-09-28 10:52:49.367684] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:14.745 [2024-09-28 10:52:49.367693] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:14.745 [2024-09-28 10:52:49.367701] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:14.745 [2024-09-28 
10:52:49.367709] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:14.745 [2024-09-28 10:52:49.367717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.745 [2024-09-28 10:52:49.367731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:14.745 [2024-09-28 10:52:49.367741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.130 ms 00:32:14.745 [2024-09-28 10:52:49.367749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.745 [2024-09-28 10:52:49.370203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.745 [2024-09-28 10:52:49.370242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:14.745 [2024-09-28 10:52:49.370263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.436 ms 00:32:14.745 [2024-09-28 10:52:49.370272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.745 [2024-09-28 10:52:49.370409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.745 [2024-09-28 10:52:49.370421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:14.745 [2024-09-28 10:52:49.370431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:32:14.745 [2024-09-28 10:52:49.370440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.745 [2024-09-28 10:52:49.377688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:14.745 [2024-09-28 10:52:49.377865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:14.745 [2024-09-28 10:52:49.377931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:14.745 [2024-09-28 10:52:49.377957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.745 [2024-09-28 10:52:49.378069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:14.745 [2024-09-28 10:52:49.378096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:14.745 [2024-09-28 10:52:49.378129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:14.745 [2024-09-28 10:52:49.378182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.745 [2024-09-28 10:52:49.378278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:14.745 [2024-09-28 10:52:49.378376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:14.745 [2024-09-28 10:52:49.378405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:14.745 [2024-09-28 10:52:49.378427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.745 [2024-09-28 10:52:49.378461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:14.745 [2024-09-28 10:52:49.378485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:14.745 [2024-09-28 10:52:49.378508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:14.745 [2024-09-28 10:52:49.378583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.745 [2024-09-28 10:52:49.391654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:14.745 [2024-09-28 10:52:49.391841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:14.745 [2024-09-28 10:52:49.391897] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:14.745 [2024-09-28 10:52:49.391919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.745 [2024-09-28 10:52:49.402445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:14.745 [2024-09-28 10:52:49.402611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:14.745 [2024-09-28 10:52:49.402665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:14.745 [2024-09-28 10:52:49.402688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.745 [2024-09-28 10:52:49.402749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:14.745 [2024-09-28 10:52:49.402771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:14.745 [2024-09-28 10:52:49.402791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:14.745 [2024-09-28 10:52:49.402819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.745 [2024-09-28 10:52:49.402866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:14.745 [2024-09-28 10:52:49.402887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:14.745 [2024-09-28 10:52:49.402908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:14.745 [2024-09-28 10:52:49.403001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.745 [2024-09-28 10:52:49.403088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:14.745 [2024-09-28 10:52:49.403113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:14.745 [2024-09-28 10:52:49.403211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:14.745 [2024-09-28 10:52:49.403369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.745 [2024-09-28 10:52:49.403451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:14.745 [2024-09-28 10:52:49.403544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:14.745 [2024-09-28 10:52:49.403569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:14.745 [2024-09-28 10:52:49.403590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.745 [2024-09-28 10:52:49.403642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:14.745 [2024-09-28 10:52:49.403664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:14.745 [2024-09-28 10:52:49.403771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:14.745 [2024-09-28 10:52:49.403794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.745 [2024-09-28 10:52:49.403857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:14.745 [2024-09-28 10:52:49.403880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:14.745 [2024-09-28 10:52:49.403900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:14.746 [2024-09-28 10:52:49.403991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.746 [2024-09-28 10:52:49.404154] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 39.489 ms, result 0 00:32:15.006 00:32:15.006 00:32:15.006 10:52:49 
ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:32:17.548 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:32:17.548 10:52:51 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:32:17.549 10:52:51 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:32:17.549 10:52:51 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:32:17.549 10:52:51 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:32:17.549 10:52:51 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:17.549 10:52:51 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 94056 00:32:17.549 10:52:51 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 94056 ']' 00:32:17.549 10:52:51 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 94056 00:32:17.549 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (94056) - No such process 00:32:17.549 10:52:51 ftl.ftl_restore_fast -- common/autotest_common.sh@977 -- # echo 'Process with pid 94056 is not found' 00:32:17.549 Process with pid 94056 is not found 00:32:17.549 Remove shared memory files 00:32:17.549 10:52:51 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:32:17.549 10:52:51 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:32:17.549 10:52:51 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:32:17.549 10:52:51 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_8a691307-d087-4481-8af4-9c5dad4525aa_band_md /dev/hugepages/ftl_8a691307-d087-4481-8af4-9c5dad4525aa_l2p_l1 /dev/hugepages/ftl_8a691307-d087-4481-8af4-9c5dad4525aa_l2p_l2 /dev/hugepages/ftl_8a691307-d087-4481-8af4-9c5dad4525aa_l2p_l2_ctx /dev/hugepages/ftl_8a691307-d087-4481-8af4-9c5dad4525aa_nvc_md /dev/hugepages/ftl_8a691307-d087-4481-8af4-9c5dad4525aa_p2l_pool /dev/hugepages/ftl_8a691307-d087-4481-8af4-9c5dad4525aa_sb /dev/hugepages/ftl_8a691307-d087-4481-8af4-9c5dad4525aa_sb_shm /dev/hugepages/ftl_8a691307-d087-4481-8af4-9c5dad4525aa_trim_bitmap /dev/hugepages/ftl_8a691307-d087-4481-8af4-9c5dad4525aa_trim_log /dev/hugepages/ftl_8a691307-d087-4481-8af4-9c5dad4525aa_trim_md /dev/hugepages/ftl_8a691307-d087-4481-8af4-9c5dad4525aa_vmap 00:32:17.549 10:52:51 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:32:17.549 10:52:51 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:32:17.549 10:52:51 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:32:17.549 ************************************ 00:32:17.549 END TEST ftl_restore_fast 00:32:17.549 ************************************ 00:32:17.549 00:32:17.549 real 4m45.552s 00:32:17.549 user 4m32.698s 00:32:17.549 sys 0m12.309s 00:32:17.549 10:52:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:17.549 10:52:51 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:32:17.549 Process with pid 85083 is not found 00:32:17.549 10:52:51 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:32:17.549 10:52:51 ftl -- ftl/ftl.sh@14 -- # killprocess 85083 00:32:17.549 10:52:51 ftl -- common/autotest_common.sh@950 -- # '[' -z 85083 ']' 00:32:17.549 10:52:51 ftl -- common/autotest_common.sh@954 -- # kill -0 85083 00:32:17.549 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (85083) - 
No such process 00:32:17.549 10:52:51 ftl -- common/autotest_common.sh@977 -- # echo 'Process with pid 85083 is not found' 00:32:17.549 10:52:51 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:32:17.549 10:52:51 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=96987 00:32:17.549 10:52:51 ftl -- ftl/ftl.sh@20 -- # waitforlisten 96987 00:32:17.549 10:52:51 ftl -- common/autotest_common.sh@831 -- # '[' -z 96987 ']' 00:32:17.549 10:52:51 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:17.549 10:52:51 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:17.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:17.549 10:52:51 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:17.549 10:52:51 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:17.549 10:52:51 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:17.549 10:52:51 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:17.549 [2024-09-28 10:52:52.040017] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:32:17.549 [2024-09-28 10:52:52.040163] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96987 ] 00:32:17.549 [2024-09-28 10:52:52.172366] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:17.549 [2024-09-28 10:52:52.191890] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:17.549 [2024-09-28 10:52:52.242898] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:18.118 10:52:52 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:18.118 10:52:52 ftl -- common/autotest_common.sh@864 -- # return 0 00:32:18.118 10:52:52 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:32:18.687 nvme0n1 00:32:18.687 10:52:53 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:32:18.687 10:52:53 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:18.687 10:52:53 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:32:18.687 10:52:53 ftl -- ftl/common.sh@28 -- # stores=2b2f5faa-1a6a-4110-a9e4-758cc14f6420 00:32:18.687 10:52:53 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:32:18.687 10:52:53 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2b2f5faa-1a6a-4110-a9e4-758cc14f6420 00:32:18.948 10:52:53 ftl -- ftl/ftl.sh@23 -- # killprocess 96987 00:32:18.948 10:52:53 ftl -- common/autotest_common.sh@950 -- # '[' -z 96987 ']' 00:32:18.948 10:52:53 ftl -- common/autotest_common.sh@954 -- # kill -0 96987 00:32:18.948 10:52:53 ftl -- common/autotest_common.sh@955 -- # uname 00:32:18.948 10:52:53 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:18.948 10:52:53 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96987 00:32:18.948 killing process with pid 96987 00:32:18.948 10:52:53 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:18.948 10:52:53 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:18.948 10:52:53 ftl -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 96987' 00:32:18.948 10:52:53 ftl -- common/autotest_common.sh@969 -- # kill 96987 00:32:18.948 10:52:53 ftl -- common/autotest_common.sh@974 -- # wait 96987 00:32:19.208 10:52:53 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:32:19.469 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:19.469 Waiting for block devices as requested 00:32:19.469 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:32:19.730 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:32:19.730 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:32:19.992 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:32:25.348 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:32:25.348 10:52:59 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:32:25.348 Remove shared memory files 00:32:25.348 10:52:59 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:32:25.348 10:52:59 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:32:25.348 10:52:59 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:32:25.348 10:52:59 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:32:25.348 10:52:59 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:32:25.348 10:52:59 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:32:25.348 ************************************ 00:32:25.348 END TEST ftl 00:32:25.348 ************************************ 00:32:25.348 00:32:25.348 real 17m52.585s 00:32:25.348 user 19m56.691s 00:32:25.348 sys 1m24.635s 00:32:25.348 10:52:59 ftl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:25.348 10:52:59 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:25.348 10:52:59 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:32:25.348 10:52:59 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:32:25.348 10:52:59 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:32:25.348 10:52:59 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:32:25.348 10:52:59 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:32:25.348 10:52:59 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:32:25.348 10:52:59 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:32:25.348 10:52:59 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:32:25.348 10:52:59 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:32:25.348 10:52:59 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:32:25.348 10:52:59 -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:25.348 10:52:59 -- common/autotest_common.sh@10 -- # set +x 00:32:25.348 10:52:59 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:32:25.348 10:52:59 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:32:25.348 10:52:59 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:32:25.348 10:52:59 -- common/autotest_common.sh@10 -- # set +x 00:32:26.743 INFO: APP EXITING 00:32:26.743 INFO: killing all VMs 00:32:26.743 INFO: killing vhost app 00:32:26.743 INFO: EXIT DONE 00:32:26.743 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:27.316 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:32:27.316 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:32:27.316 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:32:27.317 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:32:27.579 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:28.154 Cleaning 00:32:28.154 
Removing: /var/run/dpdk/spdk0/config 00:32:28.154 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:28.154 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:28.154 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:28.154 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:28.154 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:28.154 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:28.154 Removing: /var/run/dpdk/spdk0 00:32:28.154 Removing: /var/run/dpdk/spdk_pid70594 00:32:28.154 Removing: /var/run/dpdk/spdk_pid70757 00:32:28.154 Removing: /var/run/dpdk/spdk_pid70959 00:32:28.154 Removing: /var/run/dpdk/spdk_pid71041 00:32:28.154 Removing: /var/run/dpdk/spdk_pid71069 00:32:28.154 Removing: /var/run/dpdk/spdk_pid71175 00:32:28.154 Removing: /var/run/dpdk/spdk_pid71193 00:32:28.154 Removing: /var/run/dpdk/spdk_pid71376 00:32:28.154 Removing: /var/run/dpdk/spdk_pid71449 00:32:28.154 Removing: /var/run/dpdk/spdk_pid71534 00:32:28.154 Removing: /var/run/dpdk/spdk_pid71629 00:32:28.154 Removing: /var/run/dpdk/spdk_pid71709 00:32:28.154 Removing: /var/run/dpdk/spdk_pid71743 00:32:28.154 Removing: /var/run/dpdk/spdk_pid71780 00:32:28.154 Removing: /var/run/dpdk/spdk_pid71846 00:32:28.154 Removing: /var/run/dpdk/spdk_pid71951 00:32:28.154 Removing: /var/run/dpdk/spdk_pid72370 00:32:28.154 Removing: /var/run/dpdk/spdk_pid72418 00:32:28.154 Removing: /var/run/dpdk/spdk_pid72459 00:32:28.154 Removing: /var/run/dpdk/spdk_pid72475 00:32:28.154 Removing: /var/run/dpdk/spdk_pid72533 00:32:28.154 Removing: /var/run/dpdk/spdk_pid72549 00:32:28.154 Removing: /var/run/dpdk/spdk_pid72607 00:32:28.154 Removing: /var/run/dpdk/spdk_pid72623 00:32:28.154 Removing: /var/run/dpdk/spdk_pid72665 00:32:28.154 Removing: /var/run/dpdk/spdk_pid72683 00:32:28.154 Removing: /var/run/dpdk/spdk_pid72725 00:32:28.154 Removing: /var/run/dpdk/spdk_pid72743 00:32:28.154 Removing: /var/run/dpdk/spdk_pid72870 00:32:28.154 Removing: /var/run/dpdk/spdk_pid72906 00:32:28.154 Removing: /var/run/dpdk/spdk_pid72990 00:32:28.154 Removing: /var/run/dpdk/spdk_pid73151 00:32:28.154 Removing: /var/run/dpdk/spdk_pid73218 00:32:28.154 Removing: /var/run/dpdk/spdk_pid73244 00:32:28.154 Removing: /var/run/dpdk/spdk_pid73661 00:32:28.154 Removing: /var/run/dpdk/spdk_pid73749 00:32:28.154 Removing: /var/run/dpdk/spdk_pid73854 00:32:28.154 Removing: /var/run/dpdk/spdk_pid73896 00:32:28.154 Removing: /var/run/dpdk/spdk_pid73916 00:32:28.154 Removing: /var/run/dpdk/spdk_pid74000 00:32:28.154 Removing: /var/run/dpdk/spdk_pid74608 00:32:28.154 Removing: /var/run/dpdk/spdk_pid74639 00:32:28.154 Removing: /var/run/dpdk/spdk_pid75100 00:32:28.154 Removing: /var/run/dpdk/spdk_pid75193 00:32:28.154 Removing: /var/run/dpdk/spdk_pid75291 00:32:28.154 Removing: /var/run/dpdk/spdk_pid75333 00:32:28.154 Removing: /var/run/dpdk/spdk_pid75353 00:32:28.154 Removing: /var/run/dpdk/spdk_pid75373 00:32:28.154 Removing: /var/run/dpdk/spdk_pid77192 00:32:28.154 Removing: /var/run/dpdk/spdk_pid77313 00:32:28.154 Removing: /var/run/dpdk/spdk_pid77317 00:32:28.154 Removing: /var/run/dpdk/spdk_pid77334 00:32:28.154 Removing: /var/run/dpdk/spdk_pid77380 00:32:28.154 Removing: /var/run/dpdk/spdk_pid77384 00:32:28.154 Removing: /var/run/dpdk/spdk_pid77396 00:32:28.154 Removing: /var/run/dpdk/spdk_pid77441 00:32:28.154 Removing: /var/run/dpdk/spdk_pid77445 00:32:28.154 Removing: /var/run/dpdk/spdk_pid77457 00:32:28.154 Removing: /var/run/dpdk/spdk_pid77496 00:32:28.154 Removing: /var/run/dpdk/spdk_pid77500 00:32:28.154 Removing: 
/var/run/dpdk/spdk_pid77512 00:32:28.154 Removing: /var/run/dpdk/spdk_pid78872 00:32:28.154 Removing: /var/run/dpdk/spdk_pid78958 00:32:28.154 Removing: /var/run/dpdk/spdk_pid80348 00:32:28.154 Removing: /var/run/dpdk/spdk_pid81745 00:32:28.154 Removing: /var/run/dpdk/spdk_pid81799 00:32:28.154 Removing: /var/run/dpdk/spdk_pid81853 00:32:28.154 Removing: /var/run/dpdk/spdk_pid81907 00:32:28.154 Removing: /var/run/dpdk/spdk_pid81984 00:32:28.154 Removing: /var/run/dpdk/spdk_pid82047 00:32:28.154 Removing: /var/run/dpdk/spdk_pid82184 00:32:28.154 Removing: /var/run/dpdk/spdk_pid82532 00:32:28.154 Removing: /var/run/dpdk/spdk_pid82552 00:32:28.154 Removing: /var/run/dpdk/spdk_pid82975 00:32:28.154 Removing: /var/run/dpdk/spdk_pid83149 00:32:28.154 Removing: /var/run/dpdk/spdk_pid83241 00:32:28.154 Removing: /var/run/dpdk/spdk_pid83341 00:32:28.154 Removing: /var/run/dpdk/spdk_pid83383 00:32:28.154 Removing: /var/run/dpdk/spdk_pid83403 00:32:28.154 Removing: /var/run/dpdk/spdk_pid83689 00:32:28.154 Removing: /var/run/dpdk/spdk_pid83727 00:32:28.154 Removing: /var/run/dpdk/spdk_pid83777 00:32:28.154 Removing: /var/run/dpdk/spdk_pid84139 00:32:28.154 Removing: /var/run/dpdk/spdk_pid84286 00:32:28.154 Removing: /var/run/dpdk/spdk_pid85083 00:32:28.154 Removing: /var/run/dpdk/spdk_pid85193 00:32:28.154 Removing: /var/run/dpdk/spdk_pid85352 00:32:28.154 Removing: /var/run/dpdk/spdk_pid85427 00:32:28.154 Removing: /var/run/dpdk/spdk_pid85702 00:32:28.154 Removing: /var/run/dpdk/spdk_pid85922 00:32:28.154 Removing: /var/run/dpdk/spdk_pid86245 00:32:28.154 Removing: /var/run/dpdk/spdk_pid86395 00:32:28.154 Removing: /var/run/dpdk/spdk_pid86526 00:32:28.154 Removing: /var/run/dpdk/spdk_pid86562 00:32:28.154 Removing: /var/run/dpdk/spdk_pid86738 00:32:28.417 Removing: /var/run/dpdk/spdk_pid86751 00:32:28.417 Removing: /var/run/dpdk/spdk_pid86788 00:32:28.417 Removing: /var/run/dpdk/spdk_pid87050 00:32:28.417 Removing: /var/run/dpdk/spdk_pid87262 00:32:28.417 Removing: /var/run/dpdk/spdk_pid87964 00:32:28.417 Removing: /var/run/dpdk/spdk_pid88706 00:32:28.417 Removing: /var/run/dpdk/spdk_pid89293 00:32:28.417 Removing: /var/run/dpdk/spdk_pid90135 00:32:28.417 Removing: /var/run/dpdk/spdk_pid90280 00:32:28.417 Removing: /var/run/dpdk/spdk_pid90358 00:32:28.417 Removing: /var/run/dpdk/spdk_pid91058 00:32:28.417 Removing: /var/run/dpdk/spdk_pid91105 00:32:28.417 Removing: /var/run/dpdk/spdk_pid91857 00:32:28.417 Removing: /var/run/dpdk/spdk_pid92333 00:32:28.417 Removing: /var/run/dpdk/spdk_pid93119 00:32:28.417 Removing: /var/run/dpdk/spdk_pid93246 00:32:28.417 Removing: /var/run/dpdk/spdk_pid93278 00:32:28.417 Removing: /var/run/dpdk/spdk_pid93336 00:32:28.417 Removing: /var/run/dpdk/spdk_pid93387 00:32:28.417 Removing: /var/run/dpdk/spdk_pid93446 00:32:28.417 Removing: /var/run/dpdk/spdk_pid93648 00:32:28.417 Removing: /var/run/dpdk/spdk_pid93718 00:32:28.417 Removing: /var/run/dpdk/spdk_pid93768 00:32:28.417 Removing: /var/run/dpdk/spdk_pid93824 00:32:28.417 Removing: /var/run/dpdk/spdk_pid93854 00:32:28.417 Removing: /var/run/dpdk/spdk_pid93910 00:32:28.417 Removing: /var/run/dpdk/spdk_pid94056 00:32:28.417 Removing: /var/run/dpdk/spdk_pid94270 00:32:28.417 Removing: /var/run/dpdk/spdk_pid94933 00:32:28.417 Removing: /var/run/dpdk/spdk_pid95590 00:32:28.417 Removing: /var/run/dpdk/spdk_pid96197 00:32:28.417 Removing: /var/run/dpdk/spdk_pid96987 00:32:28.417 Clean 00:32:28.417 10:53:03 -- common/autotest_common.sh@1451 -- # return 0 00:32:28.417 10:53:03 -- spdk/autotest.sh@385 -- # timing_exit 
post_cleanup 00:32:28.417 10:53:03 -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:28.417 10:53:03 -- common/autotest_common.sh@10 -- # set +x 00:32:28.417 10:53:03 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:32:28.417 10:53:03 -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:28.417 10:53:03 -- common/autotest_common.sh@10 -- # set +x 00:32:28.417 10:53:03 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:32:28.678 10:53:03 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:32:28.678 10:53:03 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:32:28.678 10:53:03 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:32:28.678 10:53:03 -- spdk/autotest.sh@394 -- # hostname 00:32:28.678 10:53:03 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:32:28.678 geninfo: WARNING: invalid characters removed from testname! 00:32:55.272 10:53:28 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:57.187 10:53:31 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:59.734 10:53:33 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:02.282 10:53:36 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:04.828 10:53:39 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:07.373 10:53:41 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:09.278 10:53:43 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:33:09.278 10:53:43 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:33:09.278 10:53:43 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:33:09.278 10:53:43 -- common/autotest_common.sh@1681 -- $ lcov --version 00:33:09.278 10:53:44 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:33:09.278 10:53:44 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:33:09.278 10:53:44 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:33:09.278 10:53:44 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:33:09.278 10:53:44 -- scripts/common.sh@336 -- $ IFS=.-: 00:33:09.278 10:53:44 -- scripts/common.sh@336 -- $ read -ra ver1 00:33:09.278 10:53:44 -- scripts/common.sh@337 -- $ IFS=.-: 00:33:09.278 10:53:44 -- scripts/common.sh@337 -- $ read -ra ver2 00:33:09.278 10:53:44 -- scripts/common.sh@338 -- $ local 'op=<' 00:33:09.278 10:53:44 -- scripts/common.sh@340 -- $ ver1_l=2 00:33:09.278 10:53:44 -- scripts/common.sh@341 -- $ ver2_l=1 00:33:09.278 10:53:44 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:33:09.278 10:53:44 -- scripts/common.sh@344 -- $ case "$op" in 00:33:09.278 10:53:44 -- scripts/common.sh@345 -- $ : 1 00:33:09.278 10:53:44 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:33:09.278 10:53:44 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:09.278 10:53:44 -- scripts/common.sh@365 -- $ decimal 1 00:33:09.278 10:53:44 -- scripts/common.sh@353 -- $ local d=1 00:33:09.278 10:53:44 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:33:09.278 10:53:44 -- scripts/common.sh@355 -- $ echo 1 00:33:09.278 10:53:44 -- scripts/common.sh@365 -- $ ver1[v]=1 00:33:09.278 10:53:44 -- scripts/common.sh@366 -- $ decimal 2 00:33:09.278 10:53:44 -- scripts/common.sh@353 -- $ local d=2 00:33:09.278 10:53:44 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:33:09.278 10:53:44 -- scripts/common.sh@355 -- $ echo 2 00:33:09.278 10:53:44 -- scripts/common.sh@366 -- $ ver2[v]=2 00:33:09.278 10:53:44 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:33:09.278 10:53:44 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:33:09.278 10:53:44 -- scripts/common.sh@368 -- $ return 0 00:33:09.278 10:53:44 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:09.278 10:53:44 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:33:09.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.278 --rc genhtml_branch_coverage=1 00:33:09.278 --rc genhtml_function_coverage=1 00:33:09.278 --rc genhtml_legend=1 00:33:09.278 --rc geninfo_all_blocks=1 00:33:09.278 --rc geninfo_unexecuted_blocks=1 00:33:09.278 00:33:09.278 ' 00:33:09.278 10:53:44 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:33:09.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.278 --rc genhtml_branch_coverage=1 00:33:09.278 --rc genhtml_function_coverage=1 00:33:09.278 --rc genhtml_legend=1 00:33:09.278 --rc geninfo_all_blocks=1 00:33:09.278 --rc geninfo_unexecuted_blocks=1 00:33:09.278 00:33:09.278 ' 00:33:09.278 10:53:44 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:33:09.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.278 --rc genhtml_branch_coverage=1 00:33:09.278 --rc 
genhtml_function_coverage=1 00:33:09.278 --rc genhtml_legend=1 00:33:09.278 --rc geninfo_all_blocks=1 00:33:09.278 --rc geninfo_unexecuted_blocks=1 00:33:09.278 00:33:09.278 ' 00:33:09.278 10:53:44 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:33:09.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.279 --rc genhtml_branch_coverage=1 00:33:09.279 --rc genhtml_function_coverage=1 00:33:09.279 --rc genhtml_legend=1 00:33:09.279 --rc geninfo_all_blocks=1 00:33:09.279 --rc geninfo_unexecuted_blocks=1 00:33:09.279 00:33:09.279 ' 00:33:09.279 10:53:44 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:09.279 10:53:44 -- scripts/common.sh@15 -- $ shopt -s extglob 00:33:09.279 10:53:44 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:33:09.279 10:53:44 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:09.279 10:53:44 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:09.279 10:53:44 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.279 10:53:44 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.279 10:53:44 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.279 10:53:44 -- paths/export.sh@5 -- $ export PATH 00:33:09.279 10:53:44 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.537 10:53:44 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:33:09.537 10:53:44 -- common/autobuild_common.sh@479 -- $ date +%s 00:33:09.537 10:53:44 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727520824.XXXXXX 00:33:09.537 10:53:44 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727520824.0n4B8u 00:33:09.537 10:53:44 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:33:09.537 10:53:44 -- common/autobuild_common.sh@485 -- $ '[' -n main ']' 00:33:09.537 10:53:44 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:33:09.537 10:53:44 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:33:09.537 10:53:44 -- 
common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:33:09.537 10:53:44 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:33:09.537 10:53:44 -- common/autobuild_common.sh@495 -- $ get_config_params 00:33:09.537 10:53:44 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:33:09.537 10:53:44 -- common/autotest_common.sh@10 -- $ set +x 00:33:09.537 10:53:44 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:33:09.537 10:53:44 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:33:09.537 10:53:44 -- pm/common@17 -- $ local monitor 00:33:09.537 10:53:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:09.537 10:53:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:09.537 10:53:44 -- pm/common@25 -- $ sleep 1 00:33:09.537 10:53:44 -- pm/common@21 -- $ date +%s 00:33:09.537 10:53:44 -- pm/common@21 -- $ date +%s 00:33:09.538 10:53:44 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1727520824 00:33:09.538 10:53:44 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1727520824 00:33:09.538 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1727520824_collect-vmstat.pm.log 00:33:09.538 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1727520824_collect-cpu-load.pm.log 00:33:10.476 10:53:45 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:33:10.476 10:53:45 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:33:10.476 10:53:45 -- spdk/autopackage.sh@14 -- $ timing_finish 00:33:10.476 10:53:45 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:33:10.476 10:53:45 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:33:10.476 10:53:45 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:33:10.476 10:53:45 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:33:10.476 10:53:45 -- pm/common@29 -- $ signal_monitor_resources TERM 00:33:10.476 10:53:45 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:33:10.476 10:53:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:10.476 10:53:45 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:33:10.476 10:53:45 -- pm/common@44 -- $ pid=98675 00:33:10.476 10:53:45 -- pm/common@50 -- $ kill -TERM 98675 00:33:10.476 10:53:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:10.476 10:53:45 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:33:10.476 10:53:45 -- pm/common@44 -- $ pid=98676 00:33:10.476 10:53:45 -- pm/common@50 -- $ kill -TERM 98676 00:33:10.476 + [[ -n 5764 ]] 
00:33:10.476 + sudo kill 5764 00:33:10.486 [Pipeline] } 00:33:10.502 [Pipeline] // timeout 00:33:10.507 [Pipeline] } 00:33:10.523 [Pipeline] // stage 00:33:10.529 [Pipeline] } 00:33:10.544 [Pipeline] // catchError 00:33:10.553 [Pipeline] stage 00:33:10.555 [Pipeline] { (Stop VM) 00:33:10.568 [Pipeline] sh 00:33:10.851 + vagrant halt 00:33:13.397 ==> default: Halting domain... 00:33:19.998 [Pipeline] sh 00:33:20.282 + vagrant destroy -f 00:33:22.832 ==> default: Removing domain... 00:33:23.857 [Pipeline] sh 00:33:24.140 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:33:24.151 [Pipeline] } 00:33:24.167 [Pipeline] // stage 00:33:24.172 [Pipeline] } 00:33:24.187 [Pipeline] // dir 00:33:24.193 [Pipeline] } 00:33:24.208 [Pipeline] // wrap 00:33:24.214 [Pipeline] } 00:33:24.227 [Pipeline] // catchError 00:33:24.238 [Pipeline] stage 00:33:24.241 [Pipeline] { (Epilogue) 00:33:24.256 [Pipeline] sh 00:33:24.542 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:33:29.829 [Pipeline] catchError 00:33:29.831 [Pipeline] { 00:33:29.843 [Pipeline] sh 00:33:30.126 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:33:30.126 Artifacts sizes are good 00:33:30.135 [Pipeline] } 00:33:30.149 [Pipeline] // catchError 00:33:30.159 [Pipeline] archiveArtifacts 00:33:30.166 Archiving artifacts 00:33:30.282 [Pipeline] cleanWs 00:33:30.293 [WS-CLEANUP] Deleting project workspace... 00:33:30.293 [WS-CLEANUP] Deferred wipeout is used... 00:33:30.300 [WS-CLEANUP] done 00:33:30.302 [Pipeline] } 00:33:30.316 [Pipeline] // stage 00:33:30.321 [Pipeline] } 00:33:30.335 [Pipeline] // node 00:33:30.340 [Pipeline] End of Pipeline 00:33:30.387 Finished: SUCCESS
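A note on the coverage post-processing traced near the end of this log: the job merges the pre-test lcov baseline with the post-test capture and then strips paths that should not count toward SPDK coverage (the DPDK submodule, system headers under /usr, and a couple of sample apps). The sketch below condenses those traced commands into a standalone form for readability; it is a reconstruction, not the actual autotest.sh code, and the filter list is exactly the one visible above.

    #!/usr/bin/env bash
    # Condensed sketch of the lcov aggregation seen in this log (illustrative, not autotest.sh).
    out=/home/vagrant/spdk_repo/spdk/../output
    LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
               --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 \
               --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1"

    # Merge the baseline capture with the post-test capture into one tracefile.
    lcov $LCOV_OPTS -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"

    # Remove coverage records for code that is not SPDK's own.
    for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov $LCOV_OPTS -q -r "$out/cov_total.info" "$pattern" -o "$out/cov_total.info"
    done

In the run above, the '/usr/*' filter additionally passes --ignore-errors, and the post-test capture itself (lcov -c --no-external against the repo) happens under the earlier hostname step.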
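The long scripts/common.sh xtrace in the middle of this section is a component-wise version comparison: before choosing lcov flags, the script asks whether the installed lcov (1.15 here) is older than 2 and, because it is, keeps the --rc lcov_* option spellings shown above. A rough equivalent of that check, simplified and assuming purely numeric version components (the real helper also validates each component), might look like this; the function name is a placeholder.

    # Illustrative version comparison in the spirit of the cmp_versions trace above.
    version_lt() {
        local -a a b
        local i n
        IFS=.-: read -ra a <<< "$1"
        IFS=.-: read -ra b <<< "$2"
        n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # left side is newer
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # left side is older
        done
        return 1                                        # equal versions are not "less than"
    }

    if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
        # lcov older than 2.x: use the option spellings seen in this log
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi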
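Right before the VM is halted, autopackage stops the CPU-load and vmstat collectors it started at the beginning of the step; the pm/common trace shows the pattern is one PID file per collector plus a TERM signal. The fragment below illustrates that pattern with the paths from this log; it is a simplified illustration with placeholder names, not the project's pm/common script.

    # Illustration of the stop_monitor_resources pattern traced above.
    power_dir=/home/vagrant/spdk_repo/spdk/../output/power

    stop_monitors() {
        local pidfile pid
        for pidfile in "$power_dir"/collect-cpu-load.pid "$power_dir"/collect-vmstat.pid; do
            [[ -e $pidfile ]] || continue            # collector never started; nothing to stop
            pid=$(<"$pidfile")
            kill -TERM "$pid" 2>/dev/null || true    # signal the collector to stop writing its .pm.log
        done
    }

    stop_monitors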